Robiel H. Statistics For Management
For example, if the experiment consists of tossing a coin, the experimental outcomes
are “head” and “tail”. We often wish to assign probabilities to experimental
outcomes. This can be done by several methods.
When all of the experimental outcomes are equally likely, we can use logic to assign
probabilities. This method is called the classical method.
Sometimes it is difficult or impossible to use the classical method to assign
probabilities; in such cases we can often estimate a probability from relative
frequencies.
Example: We might perform the experiment, say, 1,000 times by surveying 1,000
randomly selected consumers. If 200 of those surveyed said that they prefer Coca-Cola,
we would estimate the probability that a randomly selected consumer prefers Coca-Cola
to all other soft drinks to be 200/1000 = 0.2. This is called the relative frequency
method for assigning probability. When we use experience, intuition, or judgment to
assess a probability, we call this a subjective probability.
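The relative frequency method can be sketched with a short simulation. Here the 0.2 preference rate from the survey example is treated as an assumed true rate, only to show that the relative frequency of repeated trials approaches the underlying probability:

```python
import random

# Relative frequency method: estimate a probability from repeated trials.
# Hypothetical setup: assume the true (unknown) preference rate is 0.2.
random.seed(42)
true_rate = 0.2
n = 1000
prefers = sum(1 for _ in range(n) if random.random() < true_rate)
estimate = prefers / n  # relative frequency of "prefers Coca-Cola"
print(estimate)  # close to 0.2 for a large number of trials
```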
SAMPLE SPACE AND EVENTS
In order to calculate probabilities by using the classical method, it is important to
understand and use a sample space. The sample space of an experiment is the set of
all possible outcomes. The experimental outcomes in the sample space are often
called sample space outcomes. Example: if we toss a coin, our sample space consists
of the two possible experimental outcomes, Head and Tail. A newly married couple
plans to have two children (at different times). If B denotes a boy and G denotes a
girl, the sample space, that is, the set of all possible outcomes, contains four
outcomes: BB, BG, GB, and GG.
In order to assign probabilities to these outcomes, suppose that boys and girls
are equally likely each time a child is born. Intuitively, this says that each of the
sample space outcomes is equally likely. That is,
P(BB) = P(BG) = P(GB) = P(GG) = 0.25.
An event is a set (collection) of sample space outcomes. For instance, if we consider
the couple planning to have two children, the event “the couple will have at least one
girl” consists of the sample space outcomes BG, GB, and GG. That is, the event “the
couple will have at least one girl” will occur if and only if one of the sample space
outcomes BG, GB, or GG occurs.
The probability of an event is the sum of the probabilities of the sample space
outcomes that correspond to the event. An event is any subset of the sample space.
If an event is denoted by A, then 0 ≤ P(A) ≤ 1. If an event is certain to occur, then the
probability of this event equals 1.
In general, when a sample space is finite we can use the following method for
computing the probability of an event. If all of the sample space outcomes are
equally likely, then the probability that an event will occur is equal to the ratio of
the number of sample space outcomes in the event to the total number of sample
space outcomes.
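The classical method can be sketched in code for the two-children example: enumerate the equally likely sample space and take the ratio of event outcomes to total outcomes:

```python
from itertools import product

# Classical method: enumerate the equally likely sample space for two
# children (B = boy, G = girl) and compute P(at least one girl) as
# (# outcomes in the event) / (# outcomes in the sample space).
sample_space = ["".join(p) for p in product("BG", repeat=2)]  # BB, BG, GB, GG
event = [outcome for outcome in sample_space if "G" in outcome]  # BG, GB, GG
probability = len(event) / len(sample_space)
print(probability)  # 0.75
```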
C. P(W) = n(W)/n = 60/200 = 0.30
D. P(R) = n(R)/n = 140/200 = 0.70
Note that P(W) + P(R) = 1; W and R are complementary events.
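The marginal probabilities above, and the fact that complementary events sum to 1, can be checked directly from the counts (n = 200, n(W) = 60, n(R) = 140):

```python
# Marginal probability: P(event) = count of event / total count.
n = 200
p_W = 60 / n    # P(W) = n(W)/n = 0.30
p_R = 140 / n   # P(R) = n(R)/n = 0.70
print(p_W, p_R)
print(p_W + p_R)  # complementary events: probabilities sum to 1
```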
3. Joint probability
Is denoted by P(A∩B).
Measures the probability that two or more events occur simultaneously.
It measures the probability that an item possesses two or more specified
characteristics.
The sum of the joint probabilities must be 1.00.
P(D∩W) = n(D∩W)/n = 36/200 = 0.18
P(R∩W) = n(R∩W)/n = 0, because R and W are mutually exclusive.
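The joint probabilities above follow directly from the counts (n = 200 items, of which 36 are both D and W):

```python
# Joint probability: P(A ∩ B) = count of items with both characteristics / n.
n = 200
n_D_and_W = 36
p_joint = n_D_and_W / n
print(p_joint)  # 0.18
# Mutually exclusive events share no outcomes, so their joint count is 0:
n_R_and_W = 0
print(n_R_and_W / n)  # 0.0
```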
4. Conditional probability
A. P(S|F) = P(S∩F)/P(F) = [n(S∩F)/n] / [n(F)/n] = n(S∩F)/n(F) = 450/1,800 = 0.25
B. P(F|S) = P(F∩S)/P(S) = [n(S∩F)/n] / [n(S)/n] = n(S∩F)/n(S) = 450/800 = 0.5625
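The two conditional probabilities above reduce to ratios of counts, which is easy to verify in code (n(S∩F) = 450, n(F) = 1,800, n(S) = 800):

```python
# Conditional probability from counts: P(A|B) = n(A ∩ B) / n(B).
n_S_and_F = 450
n_F = 1800
n_S = 800
p_S_given_F = n_S_and_F / n_F   # P(S|F) = n(S∩F)/n(F)
p_F_given_S = n_S_and_F / n_S   # P(F|S) = n(S∩F)/n(S)
print(p_S_given_F)  # 0.25
print(p_F_given_S)  # 0.5625
```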
BAYES’ THEOREM
Is named after the English clergyman Thomas Bayes (1702–1761); the theorem
was published posthumously in 1763.
Sometimes we have an initial or prior probability that an event will occur.
Then, based on new information, we revise the prior probability to what is
called a posterior probability.
It simplifies the computation of P(A|B) when P(A∩B) and P(B) are not
directly given.
It helps us to revise probabilities when new information is obtained.
P(Xi|Y) = P(Xi ∩ Y) / P(Y),  where P(Y) ≠ 0
For two mutually exclusive and exhaustive events X1 and X2:
P(Xi|Y) = P(Y|Xi)P(Xi) / [P(X1 ∩ Y) + P(X2 ∩ Y)]
P(Xi|Y) = P(Y|Xi)P(Xi) / [P(Y|X1)P(X1) + P(Y|X2)P(X2)]
Since P(Xi ∩ Y) = P(Y|Xi)P(Xi) = P(Xi|Y)P(Y), for three events X1, X2, and X3:
P(Xi|Y) = P(Y|Xi)P(Xi) / P(Y) = P(Y|Xi)P(Xi) / [P(X1 ∩ Y) + P(X2 ∩ Y) + P(X3 ∩ Y)]
P(Xi|Y) = P(Y|Xi)P(Xi) / [P(Y|X1)P(X1) + P(Y|X2)P(X2) + P(Y|X3)P(X3)]
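Bayes' theorem for three mutually exclusive and exhaustive events can be sketched as follows. The priors and conditional probabilities below are hypothetical numbers chosen only to illustrate the formula:

```python
# Bayes' theorem sketch: hypothetical priors P(Xi) and likelihoods P(Y|Xi).
priors = [0.5, 0.3, 0.2]            # P(X1), P(X2), P(X3); must sum to 1
likelihoods = [0.10, 0.20, 0.40]    # P(Y|X1), P(Y|X2), P(Y|X3)

# Joint probabilities P(Xi ∩ Y) = P(Y|Xi) * P(Xi); their sum is P(Y).
joints = [l * p for l, p in zip(likelihoods, priors)]
p_Y = sum(joints)

# Posterior for each event: P(Xi|Y) = P(Y|Xi) P(Xi) / P(Y).
posteriors = [j / p_Y for j in joints]
print(round(p_Y, 4))  # 0.19
print([round(p, 4) for p in posteriors])  # posteriors sum to 1
```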
B. Compute the joint probability of each event and the new information by
multiplying the prior probability by the conditional probability (column four).
C. Sum the joint probabilities in column four; the sum is the probability of the
new information, P(Y).
D. In column five, compute the posterior probabilities using the basic
relationship of conditional probability: posterior = joint probability / P(Y).
Exercise: Find
P(A|Y) = ?  P(B|Y) = ?  P(C|Y) = ?
PROBABILITY DISTRIBUTION
BASIC CONCEPTS:
Random variable: a variable that assumes numerical values that are determined by
the outcome of an experiment, where one and only one numerical value is assigned
to each experimental outcome; its values are determined by chance.
An experiment is carried out when its outcome is uncertain.
There are two types of random variables:
A. Discrete random variable: a random variable whose possible values can
be counted or listed. It may assume a finite number of possible values, or the
possible values may take the form of a countable sequence or list such as 0, 1, 2, 3, …
(a countably infinite list).
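A small sketch of a discrete random variable: take X to be the number of heads in three coin tosses, which assigns exactly one numerical value to each experimental outcome:

```python
from itertools import product

# Discrete random variable sketch: X = number of heads in three coin
# tosses. Each outcome gets one and only one numerical value.
outcomes = ["".join(p) for p in product("HT", repeat=3)]  # 8 outcomes
X = {o: o.count("H") for o in outcomes}  # the random variable as a mapping
values = sorted(set(X.values()))
print(values)  # the countable list of possible values: [0, 1, 2, 3]
```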
defective parts are possible in that sample. In a sample of five parts, getting 2.714
defective parts is not possible, nor is getting eight defective parts possible.
If the population is large in comparison with the sample size, the effect of
sampling without replacement is minimal, and the independence assumption is
essentially met. That is, p remains relatively constant. If the sample size n is less
than 5% of the population, the independence assumption is not of great concern.
Therefore the acceptable sample size for using the binomial distribution with
samples taken without replacement is n ≤ 0.05N.
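The 5% rule can be illustrated by comparing the exact without-replacement probability (hypergeometric) with the binomial approximation. The population figures below are hypothetical, chosen so that n/N is well under 5%:

```python
from math import comb

# 5% rule sketch: hypothetical population of N = 1000 parts with 100
# defectives (p = 0.10); sample n = 5 without replacement, ask P(x = 2).
N, defectives, n, x = 1000, 100, 5, 2

# Exact probability without replacement (hypergeometric distribution)
hyper = comb(defectives, x) * comb(N - defectives, n - x) / comb(N, n)

# Binomial approximation, treating p = defectives / N as constant
p = defectives / N
binom = comb(n, x) * p**x * (1 - p) ** (n - x)

print(round(hyper, 4), round(binom, 4))  # nearly equal since n/N = 0.005
```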
2.2 Poisson distribution
It is another discrete distribution. The binomial distribution describes the
distribution of two possible outcomes, designated as successes and failures, in a
given number of trials; the Poisson distribution, however, focuses only on the
number of discrete occurrences over some interval or continuum.
In short, the Poisson distribution is a discrete distribution constructed from
the probability of occurrence of rare events over an interval.
Poisson formula:
P(x) = (λ^x · e^(−λ)) / x!
Where:
x = 0, 1, 2, 3, … = the number of occurrences per interval for which the
probability is being computed
λ = long-run average number of occurrences per interval
e = 2.71828… = base of the natural logarithm
Example: Suppose bank customers arrive randomly on weekday afternoons at an
average of 3.2 customers every 4 minutes. What is the probability of exactly
five customers arriving in a 4-minute interval on a weekday afternoon?
Solution:
x = 5
λ = 3.2 customers per 4-minute interval
e = 2.71828
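The Poisson formula for this example can be evaluated directly:

```python
from math import exp, factorial

# Poisson formula: P(x) = (lambda^x * e^(-lambda)) / x!
# Bank example: lambda = 3.2 customers per 4-minute interval, x = 5.
lam = 3.2
x = 5
p = lam**x * exp(-lam) / factorial(x)
print(round(p, 4))  # ≈ 0.114
```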
[Figure: normal curve with μ = 494, σ = 100; the area between the mean and X = 600 (Z = 1.06) is 0.3554.]
[Figure: normal curve with μ = 494, σ = 100; the tail area beyond X = 700 (Z = 2.06) is 0.0197.]
We know that the total area under the curve is 1.00, i.e. the first half is 0.50 and the
second half is 0.50.
∴ 0.5000 − 0.4803 = 0.0197
P(X > 700) = P(X > mean) − P(mean < X < 700) = 0.5000 − 0.4803 = 0.0197
C. P(X ≤ 550) = ?
μ = 494, σ = 100
Z = (X − μ)/σ = (550 − 494)/100 = 0.56
Z = 0.56 corresponds to an area of 0.2123 between the mean and X = 550.
[Figure: normal curve with μ = 494, σ = 100; the shaded area below X = 550 (Z = 0.56) is 0.7123.]
P(X ≤ 550) = 0.5000 + 0.2123
= 0.7123
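The table lookup in the example above can be reproduced with the standard normal cumulative distribution function, Φ(z) = ½(1 + erf(z/√2)):

```python
from math import erf, sqrt

# Normal-distribution sketch for the example above (mu = 494, sigma = 100):
# compute P(X <= 550) via the standard normal CDF.
mu, sigma = 494, 100
x = 550
z = (x - mu) / sigma              # (550 - 494) / 100 = 0.56
p = 0.5 * (1 + erf(z / sqrt(2)))  # Phi(0.56) ≈ 0.7123
print(round(z, 2), round(p, 4))
```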