CC-13: Artificial Intelligence (UNIT-4) Dealing With Uncertainty and Inconsistencies



Dealing with Uncertainty and Inconsistencies: So far, we have learnt knowledge representation using first-order logic and propositional logic with certainty, which means we were sure about the predicates. With this kind of knowledge representation, we might write A→B, meaning that if A is true then B is true. But consider a situation where we are not sure whether A is true or not; then we cannot express this statement. This situation is called uncertainty.
So, to represent uncertain knowledge, where we are not sure about the predicates, we need uncertain reasoning or probabilistic reasoning.
Causes of uncertainty: Following are some leading causes of uncertainty in the real world:
1. Information from unreliable sources.
2. Experimental errors.
3. Equipment faults.
4. Temperature variation.
5. Climate change.
Truth maintenance system: A truth maintenance system, or TMS, is
a knowledge representation method for representing both beliefs and their
dependencies and an algorithm called the "truth maintenance algorithm" that
manipulates and maintains the dependencies.
A truth maintenance system maintains consistency between old believed knowledge and currently believed knowledge in the knowledge base (KB) through revision. If the currently believed statements contradict the knowledge in the KB, the KB is updated with the new knowledge. It may happen that the same data is believed again later, in which case the previous knowledge is required back in the KB.
Each statement having at least one valid justification is made a part of the current
belief set. When a contradiction is found, the statement(s) responsible for the
contradiction are identified and the records are appropriately updated. This process
is called dependency-directed backtracking.
The TMS algorithm maintains its records in the form of a dependency network. Each node in the network is an entry in the KB (a premise, an antecedent, an inference rule, etc.), and each arc of the network represents an inference step through which the node was derived.
A premise is a fundamental belief which is assumed to be true. Premises do not need justifications; the set of premises is the basis from which justifications for all other nodes are derived.
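The bookkeeping described above can be sketched in a few lines of Python. This is a minimal, illustrative justification-based TMS; the class and method names are our own, not from any TMS library.

```python
# Minimal sketch of a justification-based truth maintenance system (TMS).
# Names and API are illustrative, not from a real TMS implementation.

class TMS:
    def __init__(self):
        self.premises = set()       # fundamental beliefs, assumed true
        self.justifications = {}    # node -> list of antecedent sets

    def add_premise(self, node):
        self.premises.add(node)

    def justify(self, node, antecedents):
        # Record that `node` is derivable when all `antecedents` are believed.
        self.justifications.setdefault(node, []).append(set(antecedents))

    def believed(self):
        # A node is in the current belief set if it is a premise or has at
        # least one justification whose antecedents are all believed.
        beliefs = set(self.premises)
        changed = True
        while changed:
            changed = False
            for node, justs in self.justifications.items():
                if node not in beliefs and any(j <= beliefs for j in justs):
                    beliefs.add(node)
                    changed = True
        return beliefs

tms = TMS()
tms.add_premise("A")
tms.justify("B", ["A"])        # B follows from A
tms.justify("C", ["A", "B"])   # C follows from A and B
print(sorted(tms.believed()))  # ['A', 'B', 'C']

# Revision: retracting premise A leaves B and C without valid justifications.
tms.premises.discard("A")
print(sorted(tms.believed()))  # []
```

Retracting a premise automatically retracts everything that depended on it, which is the consistency-through-revision behaviour described above.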
Reasoning: Reasoning is the mental process of deriving logical conclusions and making predictions from available knowledge, facts, and beliefs. In other words, "Reasoning is a way to infer facts from existing data." It is the general process of thinking rationally to find valid conclusions. In artificial intelligence, reasoning is essential so that a machine can think rationally, like a human brain, and perform like a human.

M K Mishra, Asst. Prof. of Comp. Sc., FMAC, Bls. Page 1 of 9


Types of Reasoning: In artificial intelligence, reasoning can be divided into the
following categories:
 Deductive reasoning
 Inductive reasoning
 Abductive reasoning
 Common Sense Reasoning
 Monotonic Reasoning
 Non-monotonic Reasoning
1. Deductive reasoning: Deductive reasoning is deducing new information from logically related known information. It is a form of valid reasoning, which means the argument's conclusion must be true when the premises are true. It is sometimes referred to as top-down reasoning.
Example: Premise-1: All humans eat veggies.
Premise-2: Suresh is human.
Conclusion: Suresh eats veggies.
The general process of deductive reasoning is given below:

Theory → Hypothesis → Patterns → Confirmation
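The deductive step in the Suresh example can be sketched as a tiny forward-chaining loop in Python; the string encodings of facts and rules are illustrative.

```python
# Minimal sketch of deductive reasoning (modus ponens) by forward chaining.
# Fact and rule encodings are illustrative strings, not a real logic engine.

facts = {"human(Suresh)"}
rules = [({"human(Suresh)"}, "eats_veggies(Suresh)")]  # (premises, conclusion)

# Apply rules until no new fact can be derived.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print("eats_veggies(Suresh)" in facts)  # True
```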

2. Inductive Reasoning: Inductive reasoning is a form of reasoning that arrives at a conclusion from a limited set of facts by the process of generalization. It starts with a series of specific facts or data and reaches a general statement or conclusion. Inductive reasoning is also known as cause-effect reasoning or bottom-up reasoning.
In inductive reasoning, we use historical data or various premises to generate a generic rule, for which the premises support the conclusion. The premises provide only probable support to the conclusion, so the truth of the premises does not guarantee the truth of the conclusion.
Example: Premise: All of the pigeons we have seen in the zoo are white.
Conclusion: Therefore, we can expect all the pigeons to be white.

Observations → Patterns → Hypothesis → Theory
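The pigeon example can be sketched as a small generalization routine in Python; the `generalize` helper is our own illustration, and it shows why a single new observation can refute the hypothesis.

```python
# Minimal sketch of inductive generalization: from specific observations we
# form a general hypothesis, which later evidence may still refute.

observations = ["white", "white", "white", "white"]  # pigeons seen so far

def generalize(observations):
    # If every observed pigeon shares one colour, hypothesise that all do.
    colours = set(observations)
    if len(colours) == 1:
        return f"all pigeons are {colours.pop()}"
    return None  # no single generalization fits the observations

print(generalize(observations))             # all pigeons are white
print(generalize(observations + ["grey"]))  # None: hypothesis refuted
```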

3. Abductive reasoning: Abductive reasoning is a form of logical reasoning which starts with one or more observations and then seeks the most likely explanation or conclusion for them. It is an extension of deductive reasoning, but in abductive reasoning the premises do not guarantee the conclusion.
Example: Implication: Cricket ground is wet if it is raining
Axiom: Cricket ground is wet.
Conclusion: It is raining.
4. Common Sense Reasoning: Common sense reasoning is an informal form of reasoning, which can be gained through experience. It simulates the human ability to make presumptions about events that occur every day. It relies on good judgment rather than exact logic and operates on heuristic knowledge and heuristic rules.
Example: 1. One person can be at one place at a time.
2. If I put my hand in a fire, then it will burn.
The above two statements are examples of common sense reasoning, which a human mind can easily understand and assume.
5. Monotonic Reasoning: In monotonic reasoning, once a conclusion is drawn, it remains the same even if we add more information to the existing information in our knowledge base.
Example: Earth revolves around the Sun.
This is a fact, and it does not change even if we add another sentence to the knowledge base, such as "The moon revolves around the earth."
Advantages of Monotonic Reasoning:
 In monotonic reasoning, every old proof always remains valid.
 If we deduce some facts from the available facts, they remain valid forever.
Disadvantages of Monotonic Reasoning:
 We cannot represent real-world scenarios using monotonic reasoning.
 Hypothetical knowledge cannot be expressed with monotonic reasoning, which means facts must be true.
 Since we can only derive conclusions from the old proofs, new knowledge from the real world cannot be added.
6. Non-monotonic Reasoning: In non-monotonic reasoning, some conclusions may be invalidated if we add more information to our knowledge base. A logic is said to be non-monotonic if some conclusions can be invalidated by adding more knowledge to the knowledge base. Non-monotonic reasoning deals with incomplete and uncertain models. Default reasoning is a form of non-monotonic reasoning where possible conclusions are inferred from general rules which may have exceptions (defaults).
Example: Let us suppose the knowledge base contains the following knowledge:
 Birds can fly
 Penguins cannot fly
 Pitty is a bird
From the above sentences, we can conclude that Pitty can fly.
However, if we add another sentence to the knowledge base, "Pitty is a penguin", we conclude that "Pitty cannot fly", which invalidates the above conclusion.
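The Pitty example can be sketched as a default rule with an exception in Python; the `can_fly` helper is our own illustration of default reasoning.

```python
# Minimal sketch of default (non-monotonic) reasoning for the Pitty example:
# the default rule "birds can fly" applies unless an exception is known.

def can_fly(facts):
    if "penguin" in facts:   # exception overrides the default rule
        return False
    return "bird" in facts   # default: birds can fly

print(can_fly({"bird"}))             # True: default applies
print(can_fly({"bird", "penguin"}))  # False: new knowledge invalidates it
```

Adding the fact "penguin" flips the earlier conclusion, which is exactly the non-monotonic behaviour described above.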
Advantages of Non-monotonic reasoning:
 For real-world systems such as Robot navigation, we can use non-monotonic
reasoning.
 In Non-monotonic reasoning, we can choose probabilistic facts or can make
assumptions.



Disadvantages of Non-monotonic Reasoning:
 In non-monotonic reasoning, the old facts may be invalidated by adding new
sentences.
 It cannot be used for theorem proving.

Difference between Inductive and Deductive reasoning:

Basis of comparison | Deductive Reasoning | Inductive Reasoning
Definition | Deductive reasoning is a form of valid reasoning that deduces new information or conclusions from known, related facts. | Inductive reasoning arrives at a conclusion by generalization, using specific facts or data.
Approach | Follows a top-down approach. | Follows a bottom-up approach.
Starts from | Starts from premises. | Starts from the conclusion.
Validity | The conclusion must be true if the premises are true. | The truth of the premises does not guarantee the truth of the conclusion.
Usage | Harder to use, as it needs facts that must be true. | Fast and easy, as it needs evidence instead of true facts; we often use it in daily life.
Process | Theory → hypothesis → patterns → confirmation. | Observations → patterns → hypothesis → theory.

Probabilistic reasoning: Probabilistic reasoning is a way of knowledge


representation where we apply the concept of probability to indicate the uncertainty
in knowledge. In probabilistic reasoning, we combine probability theory with logic to
handle the uncertainty.
In the real world there are many scenarios where the certainty of something is not confirmed, such as "It will rain today", the behaviour of someone in a given situation, or the outcome of a match between two teams or two players. These are probable sentences: we can assume they will happen, but we are not sure about them, so here we use probabilistic reasoning.
Need of probabilistic reasoning in AI:
 When there are unpredictable outcomes.
 When specifications or possibilities of predicates become too large to handle.
 When an unknown error occurs during an experiment.

In probabilistic reasoning, there are two ways to solve problems with uncertain
knowledge:



 Bayes' rule
 Bayesian Statistics
As probabilistic reasoning uses probability and related terms, so before
understanding probabilistic reasoning, let's understand some common terms:
Probability: Probability can be defined as the chance that an uncertain event will occur. It is the numerical measure of the likelihood that an event will occur. The value of a probability always lies between 0 and 1:
0 ≤ P(A) ≤ 1, where P(A) is the probability of an event A.
P(A) = 0 indicates total uncertainty in an event A.
P(A) = 1 indicates total certainty in an event A.
We can find the probability of an uncertain event by using the formula:

Probability of occurrence = (Number of desired outcomes) / (Total number of outcomes)

 P(¬A) = probability of event A not happening.
 P(¬A) + P(A) = 1.
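A quick numeric check of the two formulas above, using a fair six-sided die as an assumed example:

```python
# Probability of rolling an even number with a fair six-sided die,
# computed with the basic formula and checked against the complement rule.

desired = 3   # outcomes 2, 4, 6
total = 6
p = desired / total

print(p)       # 0.5
print(1 - p)   # P(not A) = 1 - P(A) = 0.5
assert 0 <= p <= 1            # probability always lies in [0, 1]
assert p + (1 - p) == 1       # P(A) + P(not A) = 1
```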
Event: Each possible outcome of a variable is called an event.
Sample space: The collection of all possible events is called sample space.
Random variables: Random variables are used to represent the events and objects
in the real world.
Prior probability: The prior probability of an event is probability computed before
observing new information.
Posterior Probability: The probability that is calculated after all evidence or information has been taken into account. It is a combination of the prior probability and new information.
Conditional probability: Conditional probability is a probability of occurrence of an
event when another event has already happened.
Let us suppose we want to calculate event A when event B has already occurred, "the probability of A under the condition B". It can be written as:

P(A|B) = P(A⋀B) / P(B)

where P(A⋀B) is the joint probability of A and B, and P(B) is the marginal probability of B.

It can be explained using a Venn diagram: since event B has occurred, the sample space is reduced to the set B, and we can now calculate event A, given that B has already occurred, by dividing the probability P(A⋀B) by P(B).



Example 1: In a group of 100 sports car buyers, 40 bought alarm systems, 30
purchased bucket seats, and 20 purchased an alarm system and bucket seats. If a
car buyer chosen at random bought an alarm system, what is the probability they
also bought bucket seats?
Solution: Let A be the event that the buyer bought bucket seats, and B be the event that the buyer bought an alarm system.

P(A|B) = P(A⋀B) / P(B) = 0.2 / 0.4 = 0.5 = 50%

The probability that a buyer bought bucket seats, given that they purchased an alarm system, is 50%.
Example 2: In a class, 70% of the students like English and 40% of the students like both English and mathematics. What percentage of the students who like English also like Mathematics?
Solution: Let A be the event that a student likes Mathematics, and B be the event that a student likes English.

P(A|B) = P(A⋀B) / P(B) = 0.4 / 0.7 ≈ 0.57 = 57%

Hence, about 57% of the students who like English also like Mathematics.
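Both worked examples reduce to the same formula; a small Python sketch makes the arithmetic explicit (the `conditional` helper is our own):

```python
# Conditional probability: P(A|B) = P(A and B) / P(B).

def conditional(p_a_and_b, p_b):
    return p_a_and_b / p_b

# Example 1: alarm systems and bucket seats.
print(conditional(0.2, 0.4))             # 0.5, i.e. 50%

# Example 2: English and mathematics.
print(round(conditional(0.4, 0.7), 2))   # 0.57, i.e. about 57%
```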
Bayesian Probabilistic Inference:

Bayes' theorem:  P(A|B) = P(B|A) P(A) / P(B)

It is a way to calculate the value of P(A|B) with the knowledge of P(B|A).
Proof: From the definition of conditional probability,

P(A|B) = P(A⋀B) / P(B)        -------- eqn(1)

Similarly, P(B|A) = P(A⋀B) / P(A)

 P(A⋀B) = P(B|A) P(A)        -------- eqn(2)

Putting the value of P(A⋀B) from eqn(2) into eqn(1), we get:

P(A|B) = P(B|A) P(A) / P(B)    (Proved)
P(A|B) is known as the posterior, which we need to calculate; it is read as the probability of hypothesis A given that evidence B has occurred.
P(B|A) is called the likelihood: assuming the hypothesis is true, we calculate the probability of the evidence.
P(A) is called the prior probability: the probability of the hypothesis before considering the evidence.
P(B) is called the marginal probability: the pure probability of the evidence.
The Bayesian inference is an application of Bayes' theorem, which is fundamental
to Bayesian statistics.
Applying Bayes' rule: Bayes' rule allows us to compute the single term P(B|A) in terms of P(A|B), P(B), and P(A). This is very useful in cases where we have good probability estimates for three of these terms and want to determine the fourth. Suppose we want to perceive the effect of some unknown cause and want to compute that cause; then Bayes' rule becomes:

P(cause | effect) = P(effect | cause) P(cause) / P(effect)
Example-1: What is the probability that a patient has the disease meningitis, given a stiff neck?
Given data: A doctor is aware that the disease meningitis causes a patient to have a stiff neck 80% of the time. He is also aware of some more facts, which are given as follows:
The known probability that a patient has meningitis is 1/30,000.
The known probability that a patient has a stiff neck is 2%.
Let A be the proposition that the patient has a stiff neck and B be the proposition that the patient has meningitis. We can then calculate the following:
P(A|B) = 0.8
P(B) = 1/30000
P(A) = 0.02

P(B|A) = P(A|B) P(B) / P(A) = (0.8 × 1/30000) / 0.02 ≈ 0.00133

Hence, we can assume that about 1 in 750 patients with a stiff neck has meningitis.
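The meningitis calculation can be checked directly with Bayes' rule in Python (the `bayes` helper is our own):

```python
# The meningitis example computed with Bayes' rule:
# P(cause|effect) = P(effect|cause) * P(cause) / P(effect).

def bayes(p_effect_given_cause, p_cause, p_effect):
    return p_effect_given_cause * p_cause / p_effect

p = bayes(0.8, 1 / 30000, 0.02)
print(p)             # ≈ 0.001333, i.e. about 1 in 750
print(round(1 / p))  # 750
```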
Basics of NLP: NLP stands for Natural Language Processing, a field at the intersection of computer science, human language, and artificial intelligence. It is the technology used by machines to understand, analyse, manipulate, and interpret human languages. It helps developers organize knowledge for performing tasks such as translation, automatic summarization, Named Entity Recognition (NER), speech recognition, relationship extraction, and topic segmentation.
Advantages of NLP:
 NLP helps users to ask questions about any subject and get a direct response
within seconds.
 NLP offers exact answers to a question, meaning it does not return unnecessary or unwanted information.
 NLP helps computers to communicate with humans in their languages.
 It is very time efficient.
 Most companies use NLP to improve the efficiency and accuracy of documentation processes, and to identify information in large databases.



Disadvantages of NLP:
 NLP systems can be unpredictable.
 NLP may require more keystrokes.
 NLP systems struggle to adapt to a new domain; their functionality is limited, which is why an NLP system is usually built for a single, specific task.
Components of NLP:
1. Natural Language Understanding (NLU): Natural Language Understanding (NLU) helps the machine understand and analyse human language by extracting metadata from content, such as concepts, entities, keywords, emotions, relations, and semantic roles. NLU is mainly used in business applications to understand the customer's problem in both spoken and written language.
2. Natural Language Generation (NLG): Natural Language Generation (NLG) acts
as a translator that converts the computerized data into natural language
representation. It mainly involves Text planning, Sentence planning, and Text
Realization.
Difference between NLU and NLG:

NLU | NLG
NLU is the process of reading and interpreting language. | NLG is the process of writing or generating language.
It produces non-linguistic outputs from natural language inputs. | It constructs natural language outputs from non-linguistic inputs.
Applications of NLP:
1. Question Answering: Question Answering focuses on building systems that
automatically answer the questions asked by humans in a natural language.
2. Spam Detection: Spam detection is used to detect unwanted e-mails getting to a
user's inbox.
3. Sentiment Analysis: Sentiment Analysis, also known as opinion mining, is used on the web to analyse the attitude, behaviour, and emotional state of the sender. This application is implemented through a combination of NLP and statistics, by assigning values to the text (positive, negative, or neutral) and identifying the mood of the context (happy, sad, angry, etc.).
4. Machine Translation: Machine translation is used to translate text or speech
from one natural language to another natural language.
Example: Google Translator
5. Spelling correction: Word-processor software such as Microsoft Word and PowerPoint provides spelling correction.
6. Speech Recognition: Speech recognition is used for converting spoken words
into text. It is used in applications, such as mobile, home automation, video
recovery, dictating to Microsoft Word, voice biometrics, voice user interface, and so
on.
7. Chatbot: Implementing chatbots is one of the important applications of NLP. Many companies use chatbots to provide chat-based customer services.
8. Information extraction: Information extraction is one of the most important
applications of NLP. It is used for extracting structured information from unstructured
or semi-structured machine-readable documents.
9. Natural Language Understanding (NLU): It converts large sets of text into more formal representations, such as first-order logic structures, that are easier for computer programs to manipulate.
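As a toy illustration of sentiment analysis (application 3 above), here is a keyword-counting sketch in Python. The word lists and scoring rule are our own simplification; real systems use statistical or neural NLP models.

```python
# Toy keyword-based sentiment analysis: count positive and negative words
# and classify the text by the difference. Word lists are illustrative.

POSITIVE = {"good", "great", "happy", "excellent"}
NEGATIVE = {"bad", "sad", "angry", "terrible"}

def sentiment(text):
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("the service was great and the staff were happy"))  # positive
print(sentiment("what a sad and terrible experience"))              # negative
```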
