(19A05502T)
UNIT – I
o With the help of AI, you can create software or devices which
can solve real-world problems very easily and with accuracy.
o With the help of AI, you can create your own personal virtual
assistant.
o With the help of AI, you can build robots which can work in
environments where human survival can be at risk.
o AI opens a path for other new technologies, new devices, and new
opportunities.
GOALS OF ARTIFICIAL INTELLIGENCE:
Humans can imagine new ideas, but AI machines still cannot match
this imaginative power.
APPLICATIONS OF AI:
2. AI in Healthcare
4. AI in Finance
9. AI in Robotics:
o Artificial Intelligence has a remarkable role in robotics.
Usually, general robots are programmed so that they
can perform some repetitive task, but with the help of AI,
we can create intelligent robots which can perform tasks from
their own experience without being pre-programmed.
o Humanoid robots are among the best examples of AI in robotics;
recently, the intelligent humanoid robots named Erica
and Sophia have been developed, which can talk and behave
like humans.
10. AI in Entertainment
o We are currently using some AI-based applications in our daily
life with entertainment services such as Netflix or Amazon.
With the help of ML/AI algorithms, these services show
recommendations for programs or shows.
11. AI in Agriculture
o Agriculture is an area which requires various resources, labor,
money, and time for the best results. Nowadays, agriculture is
becoming digital, and AI is emerging in this field. Agriculture is
applying AI for agricultural robotics, soil and crop monitoring, and
predictive analysis. AI in agriculture can be very helpful for
farmers.
12. AI in E-commerce
o AI is providing a competitive edge to the e-commerce industry,
and demand for it in the e-commerce business is growing.
AI is helping shoppers discover associated products in the
recommended size, color, or even brand.
13. AI in Education
o AI can automate grading so that the tutor can have more time to
teach. An AI chatbot can communicate with students as a teaching
assistant.
o In the future, AI may work as a personal virtual tutor for
students, which will be easily accessible at any time and any
place.
FOUNDATIONS OF AI:
The first AI winter (1974-1980)
o The duration between the years 1974 and 1980 was the first AI winter.
An AI winter refers to a time period in which computer scientists dealt
with a severe shortage of government funding for AI research.
o During AI winters, public interest in artificial intelligence
decreased.
A boom of AI (1980-1987)
The second AI winter (1987-1993)
o The duration between the years 1987 and 1993 was the second AI
winter.
o Investors and governments again stopped funding AI research due to
high costs and inefficient results. Expert systems such as XCON,
though initially very cost effective, became too expensive to maintain.
The emergence of intelligent agents (1993-2011)
o Year 1997: In the year 1997, IBM's Deep Blue beat world chess champion
Garry Kasparov and became the first computer to beat a world chess
champion.
o Year 2002: For the first time, AI entered the home in the form of
Roomba, a vacuum cleaner.
o Year 2006: AI entered the business world by the year 2006. Companies
like Facebook, Twitter, and Netflix also started using AI.
o Year 2011: In the year 2011, IBM's Watson won Jeopardy!, a quiz show
in which it had to solve complex questions as well as riddles. Watson
proved that it could understand natural language and solve tricky
questions quickly.
o Year 2012: Google launched the Android app feature
"Google Now", which was able to provide information to
the user as a prediction.
o Year 2014: In the year 2014, the chatbot "Eugene Goostman"
won a competition based on the famous "Turing test."
o Year 2018: The "Project Debater" from IBM debated
complex topics with two master debaters and
performed extremely well.
o Google also demonstrated an AI program, "Duplex", a
virtual assistant which took a hairdresser appointment
over a phone call, and the lady on the other side did not
notice that she was talking to a machine.
2. Limited Memory
o Limited memory machines can store past experiences or some
data for a short period of time.
o These machines can use the stored data for a limited time period
only.
o Self-driving cars are one of the best examples of limited
memory systems. These cars can store the recent speed of nearby
cars, the distance of other cars, the speed limit, and other
information needed to navigate the road.
3. Theory of Mind
o Theory of Mind AI should understand human emotions, people,
and beliefs, and be able to interact socially like humans.
These types of AI machines have still not been developed, but
researchers are making many efforts and improvements toward
developing such AI machines.
4. Self-Awareness
o Self-aware AI is the future of Artificial Intelligence.
These machines will be super intelligent and will have their
own consciousness, sentiments, and self-awareness.
o These machines will be smarter than the human mind.
o Self-aware AI does not yet exist in reality; it is still a
hypothetical concept.
INTELLIGENT AGENTS:
An AI system can be defined as the study of the rational agent and its
environment. The agents sense the environment through sensors and
act on their environment through actuators. An AI agent can have
mental properties such as knowledge, belief, intention, etc.
What is an Agent?
An agent can be anything that perceives its environment through
sensors and acts upon that environment through actuators. An agent
runs in a cycle of perceiving, thinking, and acting. An agent can
be a human agent, a robotic agent, or a software agent.
Following are the main three terms involved in the structure of an AI agent:
Architecture: Architecture is the machinery that the AI agent executes on.
Agent function: The agent function maps a percept sequence to an action:
f : P* → A
Agent program: The agent program is an implementation of the agent function. An agent
program executes on the physical architecture to produce the function f (a minimal
sketch is given below).
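Below is a minimal Python sketch of these terms, assuming a simple two-location
vacuum world (the percepts and actions are hypothetical, chosen only for
illustration): the agent function f : P* → A maps the percept sequence seen so far
to an action, and the agent program implements it.

# Agent program implementing the agent function f: P* -> A for a
# hypothetical two-location vacuum world (illustrative assumption).
def vacuum_agent_program(percept_history):
    """Map the sequence of percepts seen so far to an action."""
    location, status = percept_history[-1]   # act on the latest percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

# The architecture runs the program in a perceive-think-act cycle.
percepts = []
for percept in [("A", "Dirty"), ("A", "Clean"), ("B", "Dirty")]:
    percepts.append(percept)
    print(percept, "->", vacuum_agent_program(percepts))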
PEAS Representation
PEAS is a representation model under which the properties of an AI agent or rational agent
are grouped when we define it (an illustrative example is given after the list below). It is
made up of four terms:
o P: Performance measure
o E: Environment
o A: Actuators
o S: Sensors
Here performance measure is the objective for the success of an agent's behavior.
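As an illustration, the PEAS description of a self-driving taxi (a commonly used
example; the exact entries below are assumptions, not taken from these notes) can be
written down as a simple data structure:

# PEAS description of a hypothetical self-driving taxi agent, grouped as
# Performance measure, Environment, Actuators, Sensors.
from dataclasses import dataclass, field

@dataclass
class PEAS:
    performance: list = field(default_factory=list)
    environment: list = field(default_factory=list)
    actuators: list = field(default_factory=list)
    sensors: list = field(default_factory=list)

taxi = PEAS(
    performance=["safe trip", "short travel time", "legal drive", "comfort"],
    environment=["roads", "other vehicles", "pedestrians", "traffic signals"],
    actuators=["steering", "accelerator", "brake", "indicator", "horn"],
    sensors=["camera", "GPS", "speedometer", "odometer", "sonar"],
)
print(taxi)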
AGENTS AND ENVIRONMENTS:
An environment is everything in the world which surrounds the agent, but it is not a part
of the agent itself. An environment can be described as a situation in which an agent is
present.
The environment is where the agent lives and operates, and it provides the agent with
something to sense and act upon. An environment is mostly said to be non-deterministic.
Features of Environment:
As per Russell and Norvig, an environment can have various features from the point of view of
an agent (a small classification sketch is given after the list below):
2. Deterministic vs Stochastic:
o If an agent's current state and selected action can completely determine the next state of
the environment, then such an environment is called a deterministic environment.
o A stochastic environment is random in nature and cannot be determined completely by an
agent.
o In a deterministic, fully observable environment, an agent does not need to worry about
uncertainty.
3. Episodic vs Sequential:
o In an episodic environment, there is a series of one-shot actions, and only the current
percept is required for the action.
o However, in a sequential environment, an agent requires memory of past actions to
determine the next best actions.
4. Single-agent vs Multi-agent:
o If only one agent is involved in an environment and is operating by itself, then such an
environment is called a single-agent environment.
o However, if multiple agents are operating in an environment, then such an environment is
called a multi-agent environment.
o The agent design problems in a multi-agent environment are different from those in a
single-agent environment.
5. Static vs Dynamic:
o If the environment can change itself while an agent is deliberating, then such an
environment is called a dynamic environment; otherwise, it is called a static environment.
o Static environments are easy to deal with because an agent does not need to keep looking
at the world while deciding on an action.
o However, in a dynamic environment, agents need to keep looking at the world before each
action.
o Taxi driving is an example of a dynamic environment, whereas crossword puzzles are an
example of a static environment.
6. Discrete vs Continuous:
o If there is a finite number of percepts and actions that can be performed in an
environment, then such an environment is called a discrete environment; otherwise, it is
called a continuous environment.
o A chess game comes under a discrete environment, as there is a finite number of moves
that can be performed.
o A self-driving car is an example of a continuous environment.
7. Known vs Unknown:
o Known and unknown are not actually features of an environment; rather, they describe an
agent's state of knowledge needed to perform an action.
o In a known environment, the results of all actions are known to the agent, while in an
unknown environment, the agent needs to learn how it works in order to perform an action.
o It is quite possible for a known environment to be partially observable and for an
unknown environment to be fully observable.
8. Accessible vs Inaccessible:
o If an agent can obtain complete and accurate information about the environment's state,
then such an environment is called an accessible environment; otherwise, it is called
inaccessible.
o An empty room whose state can be defined by its temperature is an example of an
accessible environment.
o Information about an event on Earth is an example of an inaccessible environment.
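To tie these features together, the sketch below tags the example tasks mentioned
above (crossword puzzles, chess, taxi driving) with a few of the properties
discussed; the exact labels are illustrative judgments, not definitions from the notes.

# Illustrative classification of example tasks by environment properties.
environments = {
    "Crossword puzzle": {"deterministic": True,  "episodic": False,
                         "static": True,  "discrete": True,  "multi_agent": False},
    "Chess":            {"deterministic": True,  "episodic": False,
                         "static": True,  "discrete": True,  "multi_agent": True},
    "Taxi driving":     {"deterministic": False, "episodic": False,
                         "static": False, "discrete": False, "multi_agent": True},
}

for task, properties in environments.items():
    print(task, properties)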
Good Behavior: The Concept of Rationality:
Rational Agent:
A rational agent is an agent which has clear preferences, models uncertainty, and acts in a
way that maximizes its performance measure over all possible actions.
A rational agent is said to perform the right thing. AI is about creating rational agents
for use in game theory and decision theory for various real-world scenarios.
For an AI agent, rational action is most important because in AI reinforcement learning
algorithms, the agent gets a positive reward for each best possible action and a negative
reward for each wrong action.
Rationality:
The rationality of an agent is measured by its performance measure. Rationality can be judged
on the basis of the following points (a toy scoring sketch is given after the list):
o The performance measure, which defines the criterion of success.
o The agent's prior knowledge of its environment.
o The best possible actions that the agent can perform.
o The sequence of percepts the agent has observed so far.
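A toy sketch of such scoring (the +1/-1 values are assumptions made only for
illustration, echoing the reinforcement-learning remark above):

# Toy performance measure: +1 for each best possible action, -1 for each
# wrong action, summed over the agent's action sequence.
def performance_measure(actions, best_actions):
    return sum(1 if a == b else -1 for a, b in zip(actions, best_actions))

print(performance_measure(["Suck", "Right", "Suck"],
                          ["Suck", "Right", "Left"]))   # prints 1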
Types of AI Agents:
Agents can be grouped into five classes based on their degree of perceived intelligence
and capability. All these agents can improve their performance and generate better actions
over time. These are given below:
3. Goal-based agents:
o Knowledge of the current state of the environment is not always sufficient for an
agent to decide what to do.
o The agent needs to know its goal, which describes desirable situations.
o Goal-based agents expand the capabilities of the model-based agent by having the
"goal" information.
o They choose an action so that they can achieve the goal.
o These agents may have to consider a long sequence of possible actions before
deciding whether the goal is achieved or not. Such consideration of different
scenarios is called searching and planning, which makes an agent proactive (a minimal
search sketch is given after the figure below).
Fig: Goal-based agent
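A minimal sketch of the searching idea behind a goal-based agent (the one-dimensional
world below is a hypothetical example, not from the notes): the agent considers
sequences of actions and returns one that reaches its goal.

# Goal-based agent: breadth-first search over action sequences until the
# goal state is reached (hypothetical 1-D world with positions 0..4).
from collections import deque

def goal_based_agent(start, goal, successors):
    """Return a sequence of actions leading from start to the goal."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan                       # goal achieved
        for action, next_state in successors(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, plan + [action]))
    return None                               # no plan reaches the goal

def successors(s):
    """Actions available at position s: move Left or Right along 0..4."""
    moves = []
    if s < 4:
        moves.append(("Right", s + 1))
    if s > 0:
        moves.append(("Left", s - 1))
    return moves

print(goal_based_agent(0, 3, successors))     # ['Right', 'Right', 'Right']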
4. Utility-based agents:
o These agents are similar to the goal-based agent but provide an extra component
of utility measurement, which makes them different by providing a measure of
success at a given state.
o Utility-based agents act based not only on goals but also on the best way to
achieve the goal.
o A utility-based agent is useful when there are multiple possible alternatives and
the agent has to choose the best action to perform.
o The utility function maps each state to a real number to check how efficiently each
action achieves the goals (see the sketch after the figure below).
Fig: Utility-based agent
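A small sketch of the utility idea (the states, attributes, and numbers below are
assumptions for illustration only): the utility function maps each candidate
resulting state to a real number, and the agent chooses the action whose outcome has
the highest utility.

# Utility-based choice: score each resulting state with a real number and
# pick the action with the highest utility (illustrative values).
def utility(state):
    return 10 * state["at_destination"] - state["time_taken"] - 5 * state["risk"]

outcomes = {
    "take_highway":   {"at_destination": 1, "time_taken": 3, "risk": 1},
    "take_side_road": {"at_destination": 1, "time_taken": 6, "risk": 0},
}

best_action = max(outcomes, key=lambda a: utility(outcomes[a]))
print(best_action, utility(outcomes[best_action]))   # take_side_road 4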
5. Learning Agents
o A learning agent in AI is a type of agent which can learn from its past
experiences; that is, it has learning capabilities.
o It starts acting with basic knowledge and is then able to act and adapt
automatically through learning.
o A learning agent has mainly four conceptual components (a minimal sketch follows
this section), which are:
a. Learning element: It is responsible for making improvements by learning
from the environment.
b. Critic: The learning element takes feedback from the critic, which describes
how well the agent is doing with respect to a fixed performance standard.
c. Performance element: It is responsible for selecting external actions.
d. Problem generator: This component is responsible for suggesting actions
that will lead to new and informative experiences.
o Hence, learning agents are able to learn, analyze their performance, and look for
new ways to improve it.
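The skeleton below is a hypothetical illustration (names and numbers are assumptions)
of how the four components can be wired together: the critic scores each action
against a fixed standard, the learning element updates the agent's knowledge from
that feedback, the performance element exploits the knowledge, and the problem
generator occasionally suggests exploratory actions.

import random

# Hypothetical learning-agent skeleton showing the four components.
class LearningAgent:
    def __init__(self, actions):
        self.values = {a: 0.0 for a in actions}   # knowledge used by the
                                                  # performance element

    def performance_element(self):
        """Select an external action using current knowledge."""
        return max(self.values, key=self.values.get)

    def critic(self, action, reward):
        """Describe how well the agent did w.r.t. a fixed standard."""
        return reward

    def learning_element(self, action, feedback):
        """Improve future behavior from the critic's feedback."""
        self.values[action] += 0.1 * (feedback - self.values[action])

    def problem_generator(self):
        """Suggest an exploratory action for new, informative experience."""
        return random.choice(list(self.values))

agent = LearningAgent(["left", "right"])
for step in range(20):
    explore = (step % 4 == 0)
    action = agent.problem_generator() if explore else agent.performance_element()
    reward = 1.0 if action == "right" else 0.0    # assumed environment feedback
    agent.learning_element(action, agent.critic(action, reward))
print(agent.values)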