
The Journal of Engineering and Exact Sciences – jCEC, Vol. 09 N. 12 (2023)
journal homepage: https://periodicos.ufv.br/jcec
eISSN: 2527-1075
ISSN: 2446-9416
Extropy estimation of Weibull distribution under upper records
Article Info:
Article history: Received 2023-11-05 / Accepted 2023-12-12 / Available online 2023-12-12
doi: 10.18540/jcecvl9iss12pp19490

Nour El Houda Zouaoui


ORCID: https://orcid.org/0000-0003-3224-8273
Laboratory of Applied Mathematics, Mathematics Department, Biskra University, Algeria
E-mail: [email protected], [email protected]

Abstract
Recently, extropy has gained interest among academic researchers. This work explores the features
of parametric and non-parametric estimators based on upper record values under the two-parameter
Weibull distribution. We apply the Markov Chain Monte Carlo (MCMC) method to provide a Bayesian
estimator. A considerable number of theoretical properties of the procedures are determined.
Keywords: Extropy. Upper record. Bayesian estimation.

1. Introduction
In reliability theory, the concept of uncertainty plays a pivotal role in our understanding of
data and decision-making processes. Uncertainty, often quantified by entropy (see Shannon, 1948),
measures the degree to which outcomes are unpredictable or unknown. On the other hand, an
alternative measure of uncertainty, named extropy, was suggested by Lad et al. (2015), offering
a different perspective on the information contained within a system. For a continuous random
variable (rv) X with probability density function (pdf) f(x) and cumulative distribution
function (cdf) F(x), the extropy is defined by
J(X) = -\frac{1}{2}\int f^{2}(x)\,dx \qquad (1)

These measures are highly regarded in the framework of order and record statistics. In
this article, we adopt the concept of record values, which is important in a variety of practical
applications, ranging from reliability engineering to environmental research. Understanding the
behavior and characteristics of these records can provide useful information about the underlying
distribution and its properties. Readers may refer to works such as Arnold et al. (1998).
Let X_1, …, X_n be a sequence of independent and identically distributed (iid) rvs with pdf f(x)
and cdf F(x). An observation X_j is called an upper record value if it exceeds all previous
observations, that is, X_j > X_i for all i < j (Ahsanullah, 1995).
Let D_n = (U_1, …, U_n) denote the first n upper record values. The joint density function of D_n
is given by

f(u_1,\ldots,u_n) = f(u_n)\prod_{i=1}^{n-1}\frac{f(u_i)}{1-F(u_i)} \qquad (2)
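To make the definition above concrete, the following minimal Python sketch extracts the upper record values from an observed sequence; the helper name upper_records, the seed and the simulated Weibull sample (with b = 2, λ = 1.5) are illustrative assumptions and are not part of the paper.

```python
import numpy as np

def upper_records(x):
    """Return the upper record values of a sequence x, i.e. every
    observation that strictly exceeds all previous observations."""
    records, current_max = [], -np.inf
    for value in x:
        if value > current_max:          # strict improvement -> new upper record
            records.append(value)
            current_max = value
    return np.array(records)

# Example: upper records of a Weibull(b, lambda) sample, using the fact that
# X = (E / b)**(1/lambda) with E ~ Exp(1) has cdf F(x) = 1 - exp(-b * x**lambda).
rng = np.random.default_rng(0)
b, lam = 2.0, 1.5
x = (rng.exponential(1.0, size=50) / b) ** (1.0 / lam)
print(upper_records(x))
```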

The estimation of extropy has recently attracted attention from several researchers. We may
refer to Qiu (2017), who provided some estimators of extropy with applications to testing
uniformity. On the other hand, the problem of estimating the extropy based on complete samples has
been considered recently by some authors; see, for example, Qiu and Jia (2018). Characterization
results are also given. Additionally, Zouaoui and Raqab (2022) investigated the evaluation of the

uncertainty increments for the records and also provided related characterization results. For
more on statistical inference, see Jose and Abdul Sathar (2019) and Ahmed et al. (2023).
This paper aims to explore the extropy of record values within the context of the Weibull
distribution. We seek to gain deeper insight into the estimation of extropy for the Weibull
distribution, which is noted for its adaptability and is commonly used to describe life-cycle
data and reliability problems; see Baratpour et al. (2007), Chacko and Asha (2021) and
Murthy et al. (2004).
The pdf of the Weibull distribution is given by

f(x) = b\lambda x^{\lambda-1} e^{-b x^{\lambda}}, \qquad x>0,\; b,\lambda>0 \qquad (3)

Understanding the extropy associated with Weibull-distributed records can enhance our ability
to model and predict phenomena in various fields, from engineering to environmental studies.
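Because the extropy (1) of the pdf (3) can be evaluated by direct numerical integration, a short Python sketch of that computation is given below; the function names and the values b = 2, λ = 1.5 are illustrative assumptions rather than quantities used in the paper.

```python
import numpy as np
from scipy.integrate import quad

def weibull_pdf(x, b, lam):
    """Weibull pdf of equation (3): f(x) = b*lam*x**(lam-1)*exp(-b*x**lam), x > 0."""
    return b * lam * x ** (lam - 1.0) * np.exp(-b * x ** lam)

def extropy(b, lam):
    """Extropy of equation (1), J(X) = -(1/2) * integral of f(x)^2, evaluated numerically."""
    integral, _ = quad(lambda x: weibull_pdf(x, b, lam) ** 2, 0.0, np.inf)
    return -0.5 * integral

print(extropy(b=2.0, lam=1.5))
```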

2. Maximum likelihood estimation


The maximum likelihood (ML) method can be used to explore the range of possible distributions and
parameters: it seeks the parameter values that maximize the likelihood function over the parameter
space. It is a widely used statistical inference technique that can be applied to a variety of
distributions and models. The Fisher information matrix (FIM) can be used to calculate confidence
intervals (CIs) thanks to the asymptotic properties of the MLEs.

2.1 Likelihood Equations


Let D_n = (U_1, …, U_n) be the first n upper record values from the Weibull distribution W(b, λ).
From (2) and (3), the likelihood function, say L(b, λ; u), can be written as

L(b,\lambda;u) = e^{-b u_n^{\lambda}}\prod_{i=1}^{n} b\lambda u_i^{\lambda-1} \qquad (4)

The log-likelihood function is given by:


l(b,\lambda;u) = -b u_n^{\lambda} + \sum_{i=1}^{n}\left[\log b + \log\lambda + (\lambda-1)\log u_i\right]
             = -b u_n^{\lambda} + n\log b + n\log\lambda + (\lambda-1)\sum_{i=1}^{n}\log u_i \qquad (5)

The partial derivatives of l(b, λ; u) with respect to b and λ are, respectively,

\frac{\partial l(b,\lambda;u)}{\partial b} = -u_n^{\lambda} + \frac{n}{b} \qquad (6)

\frac{\partial l(b,\lambda;u)}{\partial \lambda} = -b\,u_n^{\lambda}\log u_n + \frac{n}{\lambda} + \sum_{i=1}^{n}\log u_i \qquad (7)

Now, to get the MLEs of b and λ we set equations (6) and (7) to zero. Therefore, the MLE
of b is given by

\hat{b}_{ML} = \frac{n}{u_n^{\hat{\lambda}_{ML}}} \qquad (8)

The MLE λ̂_ML is obtained from equation (7) after substituting (8); in practice this is usually done
numerically, and one of the most widely used methods is the Newton–Raphson (N–R) method.
Using the invariance property, the MLE of J_X is:


\hat{J}_{X} = -\,\hat{b}_{ML}^{\frac{-2\hat{\lambda}_{ML}+3}{2\hat{\lambda}_{ML}}}\;\frac{\hat{\lambda}_{ML}^{2}}{2}\;\Gamma\!\left(2\hat{\lambda}_{ML}-1\right) \qquad (9)
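A minimal numerical sketch of computing the MLEs b̂_ML and λ̂_ML is shown below: instead of applying Newton–Raphson to (7), it maximizes the log-likelihood (5) over λ with b profiled out through (8), which is equivalent at the optimum; the record values stored in u are hypothetical illustrative data, not data from the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_profile_loglik(lam, u):
    """Negative of the log-likelihood (5) with b replaced by b_hat = n / u_n**lam from (8)."""
    n = len(u)
    b_hat = n / u[-1] ** lam
    return -(-b_hat * u[-1] ** lam + n * np.log(b_hat) + n * np.log(lam)
             + (lam - 1.0) * np.sum(np.log(u)))

u = np.array([0.45, 0.90, 1.30, 1.75, 2.10])         # hypothetical upper record values u_1 < ... < u_n
res = minimize_scalar(lambda lam: neg_profile_loglik(lam, u),
                      bounds=(1e-3, 20.0), method="bounded")
lam_hat = res.x
b_hat = len(u) / u[-1] ** lam_hat                    # equation (8)
print(b_hat, lam_hat)
```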

2.2 Asymptotic Confidence Intervals for MLEs


To assess the accuracy of the MLEs, we use their asymptotic variances to determine the ACIs
of b and λ. Let I(Φ) be the Fisher information matrix, where Φ = (b, λ); the FIM can be given as
follows

I(\Phi) = \begin{bmatrix} -\dfrac{\partial^{2} l(b,\lambda;u)}{\partial b^{2}} & -\dfrac{\partial^{2} l(b,\lambda;u)}{\partial b\,\partial\lambda} \\[6pt] -\dfrac{\partial^{2} l(b,\lambda;u)}{\partial b\,\partial\lambda} & -\dfrac{\partial^{2} l(b,\lambda;u)}{\partial \lambda^{2}} \end{bmatrix} \qquad (10)

Thus,

I(\Phi) = \begin{bmatrix} \dfrac{n}{b^{2}} & u_n^{\lambda}\log u_n \\[6pt] u_n^{\lambda}\log u_n & b\,u_n^{\lambda}(\log u_n)^{2} + \dfrac{n}{\lambda^{2}} \end{bmatrix} \qquad (11)

To find Var(b̂_ML) and Var(λ̂_ML), we invert the FIM evaluated at the MLEs. Thus:

I(\hat{\Phi}) = \begin{bmatrix} \dfrac{n}{\hat{b}_{ML}^{2}} & u_n^{\hat{\lambda}_{ML}}\log u_n \\[6pt] u_n^{\hat{\lambda}_{ML}}\log u_n & \hat{b}_{ML}\,u_n^{\hat{\lambda}_{ML}}(\log u_n)^{2} + \dfrac{n}{\hat{\lambda}_{ML}^{2}} \end{bmatrix} \qquad (12)

where Φ̂ is the estimate of Φ. Thus

I(\hat{\Phi})^{-1} = \frac{1}{\det I(\hat{\Phi})}\begin{bmatrix} \hat{b}_{ML}\,u_n^{\hat{\lambda}_{ML}}(\log u_n)^{2} + \dfrac{n}{\hat{\lambda}_{ML}^{2}} & -\,u_n^{\hat{\lambda}_{ML}}\log u_n \\[6pt] -\,u_n^{\hat{\lambda}_{ML}}\log u_n & \dfrac{n}{\hat{b}_{ML}^{2}} \end{bmatrix}
= \begin{bmatrix} Var(\hat{b}_{ML}) & Cov(\hat{b}_{ML},\hat{\lambda}_{ML}) \\[6pt] Cov(\hat{b}_{ML},\hat{\lambda}_{ML}) & Var(\hat{\lambda}_{ML}) \end{bmatrix} \qquad (13)

where

\det I(\hat{\Phi}) = \frac{n}{\hat{b}_{ML}^{2}}\left(\hat{b}_{ML}\,u_n^{\hat{\lambda}_{ML}}(\log u_n)^{2} + \frac{n}{\hat{\lambda}_{ML}^{2}}\right) - \left(u_n^{\hat{\lambda}_{ML}}\log u_n\right)^{2} \qquad (14)

The (1-\varepsilon)100\% confidence intervals for b and λ are given by

\hat{b}_{ML} \pm Z_{\varepsilon/2}\sqrt{Var(\hat{b}_{ML})}\,, \qquad \hat{\lambda}_{ML} \pm Z_{\varepsilon/2}\sqrt{Var(\hat{\lambda}_{ML})} \qquad (15)

respectively, where Z_{\varepsilon/2} is the upper (\varepsilon/2)100\% percentile of the standard normal distribution.

Recall that the MLE of the extropy is

\hat{J}_{X} = -\,\hat{b}_{ML}^{\frac{-2\hat{\lambda}_{ML}+3}{2\hat{\lambda}_{ML}}}\;\frac{\hat{\lambda}_{ML}^{2}}{2}\;\Gamma\!\left(2\hat{\lambda}_{ML}-1\right) \qquad (16)


To obtain the 100(1-\varepsilon)\% two-sided asymptotic approximate CIs for J_X, the delta method
can be used to approximate the variance of the extropy estimator. Let:

D_{J_X} = \left(\dfrac{\partial J_X}{\partial b}\;\;\dfrac{\partial J_X}{\partial \lambda}\right)\Bigg|_{\,b=\hat{b}_{ML},\;\lambda=\hat{\lambda}_{ML}} \qquad (17)

Then we find the estimated variance of extropy as follows

Var(\hat{J}_X) = D_{J_X}\, I(\hat{\Phi})^{-1}\, D_{J_X}^{T} \qquad (18)

Therefore

\left[\hat{J}_X - Z_{\varepsilon/2}\sqrt{Var(\hat{J}_X)}\;,\; \hat{J}_X + Z_{\varepsilon/2}\sqrt{Var(\hat{J}_X)}\right] \qquad (19)
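As a numerical illustration of (10)–(19), the sketch below approximates the observed information by finite differences of the log-likelihood (5) instead of using the closed-form matrix (12), evaluates the extropy integral (1) for the pdf (3) numerically, and then forms the Wald intervals (15) and the delta-method interval (19); the record values and the values plugged into theta_hat are illustrative placeholders for the MLEs of Section 2.1.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def loglik(theta, u):
    """Log-likelihood (5) of the record values u under W(b, lambda)."""
    b, lam = theta
    n = len(u)
    return -b * u[-1] ** lam + n * np.log(b) + n * np.log(lam) + (lam - 1.0) * np.sum(np.log(u))

def extropy(theta):
    """Extropy (1) of W(b, lambda), by numerical integration of the squared pdf (3)."""
    b, lam = theta
    f = lambda x: b * lam * x ** (lam - 1.0) * np.exp(-b * x ** lam)
    return -0.5 * quad(lambda x: f(x) ** 2, 0.0, np.inf)[0]

def num_grad(fun, theta, h=1e-5):
    """Central finite-difference gradient of a scalar function."""
    g = np.zeros(len(theta))
    for i in range(len(theta)):
        e = np.zeros(len(theta))
        e[i] = h
        g[i] = (fun(theta + e) - fun(theta - e)) / (2.0 * h)
    return g

def num_hessian(fun, theta, h=1e-4):
    """Central finite-difference Hessian of a scalar function."""
    k = len(theta)
    H = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            ei, ej = np.eye(k)[i] * h, np.eye(k)[j] * h
            H[i, j] = (fun(theta + ei + ej) - fun(theta + ei - ej)
                       - fun(theta - ei + ej) + fun(theta - ei - ej)) / (4.0 * h * h)
    return H

u = np.array([0.45, 0.90, 1.30, 1.75, 2.10])            # hypothetical upper record values
theta_hat = np.array([1.2, 1.6])                        # placeholder for (b_hat, lambda_hat)
info = -num_hessian(lambda t: loglik(t, u), theta_hat)  # observed information, as in (10)
cov = np.linalg.inv(info)                               # asymptotic covariance, as in (13)
z = norm.ppf(1.0 - 0.05 / 2.0)                          # Z_{eps/2} for a 95% interval

ci_b = theta_hat[0] + np.array([-1.0, 1.0]) * z * np.sqrt(cov[0, 0])    # interval (15) for b
ci_lam = theta_hat[1] + np.array([-1.0, 1.0]) * z * np.sqrt(cov[1, 1])  # interval (15) for lambda

D = num_grad(extropy, theta_hat)                        # delta-method gradient, as in (17)
var_J = D @ cov @ D                                     # variance approximation (18)
J_hat = extropy(theta_hat)
ci_J = J_hat + np.array([-1.0, 1.0]) * z * np.sqrt(var_J)               # interval (19)
print(ci_b, ci_lam, ci_J)
```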

3. Bayes inference

In this section, we concentrate on the main objective, namely Bayesian estimation of the
parameters b and λ and of J_X. For this method we use the squared error (SE) and LINEX loss
functions, which are defined respectively as follows:

L_{1}(\Phi,\hat{\Phi}) = (\Phi-\hat{\Phi})^{2}, \qquad L_{2}(\Phi,\hat{\Phi}) = e^{\eta(\hat{\Phi}-\Phi)} - \eta(\hat{\Phi}-\Phi) - 1 \qquad (20)

We now choose the prior distributions of b and λ. We assume that the parameters independently follow
Gamma distributions, b ~ Gamma(α, β) and λ ~ Gamma(γ, τ), where α, β, γ and τ are positive
real constants. Thus, the joint prior distribution is

P(b,\lambda) \propto \lambda^{\gamma-1} b^{\alpha-1} e^{-\lambda\tau - b\beta}\,; \qquad \alpha,\beta,\gamma,\tau > 0 \qquad (21)

Hence, the joint posterior distribution is

P^{*}(b,\lambda\mid u) = \frac{L(b,\lambda;u)\,P(b,\lambda)}{\iint L(b,\lambda;u)\,P(b,\lambda)\,db\,d\lambda}
= \frac{\lambda^{\gamma-1} b^{\alpha-1} e^{-\lambda\tau - b\beta - b u_n^{\lambda}}\prod_{i=1}^{n} b\lambda u_i^{\lambda-1}}{\iint \lambda^{\gamma-1} b^{\alpha-1} e^{-\lambda\tau - b\beta - b u_n^{\lambda}}\prod_{i=1}^{n} b\lambda u_i^{\lambda-1}\,db\,d\lambda} \qquad (22)

The joint posterior density can be written as:


P^{*}(b,\lambda\mid u) \propto \lambda^{n+\gamma-1}\, b^{n+\alpha-1}\prod_{i=1}^{n} u_i^{\lambda-1}\, e^{-\lambda\tau - b\left(\beta + u_n^{\lambda}\right)} \qquad (23)

3.1 Markov Chain Monte Carlo


The Bayes estimates, obtained as the posterior means of the parameters, are difficult to compute
unless numerical approximation methods are used. There are numerous approximation approaches
in the literature for dealing with this type of situation. We consider the Markov Chain Monte Carlo
(MCMC) approach with the Gibbs sampling algorithm, a popular Bayesian estimation technique that
samples from the full conditional posterior distributions. Readers may refer to Pradhan and
Kundu (2011), Chib and Greenberg (1995) and Al-Labadi and Berry (2020).
The full conditional posterior distributions for b and λ are as follows:


P^{*}(\lambda\mid b,u) \propto \lambda^{n+\gamma-1}\prod_{i=1}^{n} u_i^{\lambda-1}\, e^{-\lambda\tau - b u_n^{\lambda}} \qquad (24)

P^{*}(b\mid \lambda,u) \propto b^{n+\alpha-1}\, e^{-b\left(\beta + u_n^{\lambda}\right)} \qquad (25)

The conditional density of λ in Equation (24) cannot be reduced to a known distribution, so the
parameters cannot all be generated directly from these densities by conventional methods; we
therefore use the Metropolis–Hastings (M–H) algorithm to generate the unknown parameters. The M–H
algorithm aims to keep rejection rates as low as possible. To find the Bayes estimates (BEs) and to
construct credible intervals for the required parameters, the M–H algorithm relies on a normal
proposal distribution. The Gibbs procedure can be summarized in the following algorithm:

Step 1: Set the ML estimates of b and λ as the initial values b^0 and λ^0.

Step 2: For t = 1, …, M, generate b^t and λ^t from the conditional posterior
distributions (25) and (24), respectively.
Step 3: Repeat Step 2 until the M MCMC samples (b^1, λ^1), …, (b^M, λ^M) have been
obtained, where M is the total number of cycles.
Step 4: The Bayes estimators of the extropy given in (9) under the SE and LINEX loss functions are
given by
\hat{J}_{SE} = \frac{1}{M-m}\sum_{t=m+1}^{M} -\,(b^{t})^{\frac{-2\lambda^{t}+3}{2\lambda^{t}}}\;\frac{(\lambda^{t})^{2}}{2}\;\Gamma\!\left(2\lambda^{t}-1\right) \qquad (26)

\hat{J}_{LX} = \frac{-1}{\eta}\log\!\left[\frac{1}{M-m}\sum_{t=m+1}^{M}\exp\!\left(\eta\,(b^{t})^{\frac{-2\lambda^{t}+3}{2\lambda^{t}}}\,\frac{(\lambda^{t})^{2}}{2}\,\Gamma\!\left(2\lambda^{t}-1\right)\right)\right] \qquad (27)

where m is the number of initial iterations discarded as the burn-in period.
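One concrete way to implement Steps 1–4 is sketched below, assuming the Gamma(2, 2) priors used later in the simulation section, a random-walk normal proposal, and hypothetical record data; to stay self-contained, the extropy of each draw is obtained by numerically integrating (1) for the pdf (3) rather than through the closed form (9), and draws with λ ≤ 1/2, for which that integral diverges, are simply skipped. The step size, burn-in, η and the starting values are illustrative choices.

```python
import numpy as np
from scipy.integrate import quad

def log_post(b, lam, u, alpha=2.0, beta=2.0, gamma=2.0, tau=2.0):
    """Log of the posterior kernel: record likelihood (4) plus the Gamma priors (21)."""
    if b <= 0.0 or lam <= 0.0:
        return -np.inf
    n = len(u)
    loglik = -b * u[-1] ** lam + n * np.log(b * lam) + (lam - 1.0) * np.sum(np.log(u))
    logprior = (alpha - 1.0) * np.log(b) - beta * b + (gamma - 1.0) * np.log(lam) - tau * lam
    return loglik + logprior

def extropy(b, lam):
    """Extropy (1) of W(b, lambda), by numerical integration of the squared pdf (3)."""
    f = lambda x: b * lam * x ** (lam - 1.0) * np.exp(-b * x ** lam)
    return -0.5 * quad(lambda x: f(x) ** 2, 0.0, np.inf)[0]

def bayes_extropy(u, b0, lam0, M=10_000, m=2_000, step=0.2, eta=1.0, seed=1):
    """Random-walk Metropolis-Hastings within Gibbs (Steps 1-4): returns (J_SE, J_LX)."""
    rng = np.random.default_rng(seed)
    b, lam, chain = b0, lam0, []
    for _ in range(M):
        for which in ("b", "lam"):                               # update each parameter in turn
            pb = b + rng.normal(0.0, step) if which == "b" else b
            pl = lam + rng.normal(0.0, step) if which == "lam" else lam
            if np.log(rng.uniform()) < log_post(pb, pl, u) - log_post(b, lam, u):
                b, lam = pb, pl                                  # accept the proposal
        chain.append((b, lam))
    # discard the burn-in and skip draws where the extropy integral is not finite
    J = np.array([extropy(bb, ll) for bb, ll in chain[m:] if ll > 0.5])
    J_SE = J.mean()                                              # Bayes estimate under SE loss
    J_LX = -np.log(np.mean(np.exp(-eta * J))) / eta              # Bayes estimate under LINEX loss
    return J_SE, J_LX

u = np.array([0.45, 0.90, 1.30, 1.75, 2.10])                     # hypothetical upper record values
print(bayes_extropy(u, b0=1.2, lam0=1.6, M=4_000, m=1_000))
```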

4. Simulation
A simulation study was carried out to assess the performance of the estimation techniques
developed in Sections 2 and 3. First, we obtain the extropy of the records for unknown parameters
b and λ from equation (9) by computing the MLEs b̂_ML and λ̂_ML, from which the bias of the MLEs is
obtained for various numbers of records k. Assuming the prior hyperparameters α = 2, β = 2, γ = 2
and τ = 2, 500 observations were generated. Based on these data sets, the maximum likelihood
estimates (MLEs) and Bayes estimates of the parameters were obtained. For Bayesian estimation, we
generated 10,000 realizations of the Markov chains using the Gibbs and Metropolis–Hastings
algorithms. The results of the simulation study are summarized in Table 1. We observe that the bias
of all estimators decreases as the sample size increases, and that the bias of the Bayes estimator
under the SE loss function is smaller than that of the MLE.
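The skeleton of such a bias study is sketched below for the maximum likelihood part only; it relies on the standard representation of Weibull upper records through cumulative sums of standard exponentials, uses a crude grid search in place of Newton–Raphson, and reports the bias of (b̂, λ̂) rather than of the extropy, so its numbers are not meant to reproduce Table 1, which follows.

```python
import numpy as np

def simulate_records(n, b, lam, rng):
    """First n upper record values of W(b, lambda): since -log(1 - F(U_k)) = b*U_k**lam
    are the arrival times of a unit-rate Poisson process, U_k = (S_k / b)**(1/lam)
    with S_k a cumulative sum of standard exponentials."""
    s = np.cumsum(rng.exponential(1.0, size=n))
    return (s / b) ** (1.0 / lam)

def mle_from_records(u):
    """MLEs of (b, lambda): crude grid search over the profile of the log-likelihood (5)."""
    n = len(u)
    grid = np.linspace(0.05, 10.0, 2000)
    profile = (-n + n * np.log(n / u[-1] ** grid) + n * np.log(grid)
               + (grid - 1.0) * np.sum(np.log(u)))
    lam_hat = grid[np.argmax(profile)]
    return n / u[-1] ** lam_hat, lam_hat             # (b_hat, lambda_hat), using (8)

rng = np.random.default_rng(2023)
b_true, lam_true, n_rec, reps = 2.0, 1.5, 6, 500
estimates = np.array([mle_from_records(simulate_records(n_rec, b_true, lam_true, rng))
                      for _ in range(reps)])
print("bias of (b_hat, lambda_hat):", estimates.mean(axis=0) - np.array([b_true, lam_true]))
```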


Table 1 – The bias of MLE and BE for Weibull distribution


MLE Bayes
k Shape Scale Extropy SEL LX: h=1
|Bias| |Bias| |Bias|
6 1.5 2 0.078998995 0.006430693 0.006030694 0.012340693
7 1.5 2 0.072187881 0.013241806 0.012141809 0.019671806
8 1.5 2 0.077499109 0.007930578 0.007820571 0.009830578
9 1.5 2 0.067900459 0.017529228 0.017419221 0.018929228
6 1.5 2.5 0.078998995 0.047277892 0.04417789 0.051277892
7 1.5 2.5 0.072187881 0.054089006 0.051289009 0.062289006
8 1.5 2.5 0.077499109 0.048777778 0.042377772 0.057877778
9 1.5 2.5 0.06790046 0.058376427 0.054276421 0.061276427
6 1.6 2 0.118035454 0.016182274 0.011682279 0.019682274
7 1.6 2 0.106579012 0.027638716 0.022238717 0.027638716
8 1.6 2 0.115357326 0.018860402 0.013160403 0.018860402
9 1.6 2 0.09954352 0.034674208 0.031674201 0.034674208
6 1.6 2.5 0.118035454 0.093583058 0.091783051 0.093583058
7 1.6 2.5 0.106579012 0.1050395 0.102234394 0.196620395
8 1.6 2.5 0.115357325 0.096261187 0.091161189 0.126261187
9 1.6 2.5 0.09954352 0.112074992 0.100074993 0.187874992

References
Ahmed, E. A., El-Morshedy, M., Al-Essa, L. A., & Eliwa, M. S. (2023). Statistical inference on the
entropy measures of gamma distribution under progressive censoring: EM and MCMC
algorithms. Mathematics, 11(10), 2298. https://doi.org/10.3390/math11102298
Ahsanullah, M. (1995). Record statistics. Nova Science Publishers.
Al-Labadi, L., & Berry, S. (2020). Bayesian estimation of extropy and goodness of fit tests. Journal
of Applied Statistics, 49(2), 357–370. https://doi.org/10.1080/02664763.2019.1631004
Arnold, B. C., Balakrishnan, N., & Nagaraja, H. N. (1998). Records. Wiley.
Baratpour, S., Ahmadi, J., & Arghami, N. R. (2007). Estimating parameters for Weibull distribution
using record values. Statistical Papers, 48(2), 197–213.
https://doi.org/10.1007/s00362-006-0303-5
Chacko, M., & Asha, P. S. (2021). Estimation of entropy for Weibull distribution based on record
values. Journal of Statistical Theory and Applications, 20(2), 279-288.
https://doi.org/10.2991/jsta.d.210318.001
Chib, S., & Greenberg, E. (1995). Understanding the Metropolis-Hastings algorithm. The American
Statistician, 49(4), 327-335. https://doi.org/10.1080/00031305.1995.10476177
Jose, J., & Abdul Sathar, E. I. (2019). Residual extropy of k-record values. Statistics and
Probability Letters, 146, 1–6. https://doi.org/10.1016/j.spl.2018.10.019
Lad, F., Sanfilippo, G., & Agro, G. (2015). Extropy: complementary dual of entropy. Statistical
Science, 30(1), 40–58. https://doi.org/10.1214/14-STS499
Murthy, D. N. P., Xie, M., & Jiang, R. (2004). Weibull models. John Wiley & Sons.
Pradhan, B., & Kundu, D. (2011). Bayes estimation and prediction of the two-parameter gamma
distribution. Journal of Statistical Computation and Simulation, 81(10), 1187–1198.
https://doi.org/10.1080/00949655.2010.498562
Qiu, G. (2017). The extropy of order statistics and record values. Statistics and Probability
Letters, 120, 52–60. https://doi.org/10.1016/j.spl.2016.08.014


Qiu, G., & Jia, K. (2018). The residual extropy of order statistics. Statistics and Probability
Letters, 133, 15–22. https://doi.org/10.1016/j.spl.2017.10.017
Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical Journal,
27, 379–423, 623–656. https://doi.org/10.1002/j.1538-7305.1948.tb00917.x
Zouaoui, N. E., & Raqab, M. Z. (2022). Evaluation of the uncertainty increments for the records and
related characterizations. Journal of Applied Probability and Statistics, 17, 21–34.
