Correction published on 14 August 2020, see Entropy 2020, 22(8), 892.
Article

Bounds on Rényi and Shannon Entropies for Finite Mixtures of Multivariate Skew-Normal Distributions: Application to Swordfish (Xiphias gladius Linnaeus)

by
Javier E. Contreras-Reyes
1,2,* and
Daniel Devia Cortés
1
1
Instituto de Fomento Pesquero (IFOP), Blanco 839, Valparaíso 2361827, Chile
2
Departamento de Matemática, Universidad Técnica Federico Santa María, Valparaíso 2390123, Chile
*
Author to whom correspondence should be addressed.
Entropy 2016, 18(11), 382; https://doi.org/10.3390/e18110382
Submission received: 3 August 2016 / Revised: 4 October 2016 / Accepted: 21 October 2016 / Published: 26 October 2016
(This article belongs to the Collection Advances in Applied Statistical Mechanics)

Abstract:
Mixture models are in high demand for machine-learning analysis due to their computational tractability, and because they serve as a good approximation for continuous densities. Predominantly, entropy applications have been developed in the context of a mixture of normal densities. In this paper, we consider a novel class of skew-normal mixture models, whose components capture skewness due to their flexibility. We find upper and lower bounds for Shannon and Rényi entropies for this model. Using such a pair of bounds, a confidence interval for the approximate entropy value can be calculated. Simulation studies are then applied to a swordfish (Xiphias gladius Linnaeus) length dataset.

1. Introduction

Mixture models are in high demand for machine-learning analysis due to their computational tractability and because they offer a good approximation for continuous densities [1]. In addition, mixture models are an important statistical tool for many applications in clustering [2,3], discriminant analysis [4], image processing and satellite imaging [5,6]. Celeux and Soromenho [2] consider a Maximum Likelihood (ML)-based entropy criterion to estimate the number of clusters arising from a mixture model, and compare it with the classical Akaike (AIC) and Bayesian (BIC) information criteria. Carreira-Perpiñán [6] deals with the problem of finding all modes of multi-dimensional data, assuming a mixture of normal densities; specifically, he uses the gradient as a mode locator and controls the significance of the modes thus obtained by measuring the sparseness of the density mixture via the entropy. However, no analytical expression exists for the Shannon entropy of a normal mixture; similarly, in the case of the Kullback–Leibler divergence, an analytic evaluation of the differential entropy is impossible, so approximate calculations become inevitable [7,8,9]. Jenssen et al. [3] use the Rényi entropy [10] as a similarity measure between clusters. They consider Parzen window density estimation for differential Rényi entropy clustering to identify the worst cluster and subsequently reduce the overall number of clusters by one.
Predominantly, the entropy applications mentioned above have been developed in the normal context, but several results on both Shannon and Rényi entropies for various multivariate distributions do exist (see, e.g., [11,12]). Here, we consider the novel class of finite mixtures of multivariate skew-normal (FMSN) distributions [13]. This class provides some advantages over normal mixtures. For instance, although normal components allow an arbitrarily close approximation of any distribution as the number of components increases, in the context of supervised learning, groups of observations represented by asymmetrically distributed data can lead to wrong classifications [14]. The components of skew-normal mixture models, by contrast, capture skewness due to their flexibility.
Several examples, including references on this fact, can be found in [14,15,16,17]. Frühwirth-Schnatter and Pyne [14] studied high-dimensional flow cytometric data for Alzheimer’s disease treatment. A trivariate diffuse large B-cell lymphoma dataset is studied by [15] to cluster the cells into three groups. Lee and McLachlan [16] consider several applications, among them: (i) the clustering of a flow cytometric dataset derived from a hematopoietic stem cell transplant (HSCT) experiment, where each sample was stained with four fluorescent markers; (ii) the variables body mass index, lean body mass, and body fat percentage of a real dataset concerning biomedical measurements of Australian athletes; (iii) a portfolio of three shares listed on the Australian Stock Exchange in a value-at-risk (VaR) analysis; (iv) geometric measurements taken from X-ray images of wheat kernels in a discriminant analysis application; and (v) an image segmentation analysis. In addition, Lin et al. [17] studied the distribution of peripheral blood samples in transplanted organs.
In this paper, we calculate the bounds for Shannon and Rényi entropies for the skew-normal mixture model. The maximum entropy theorem and Jensen’s inequality are considered for the Shannon entropy case. Using such a pair of bounds, a confidence interval for the approximate entropy value can be calculated. Simulation studies are then applied to a swordfish length dataset.
The paper is organized as follows. Section 2 presents definitions of multivariate skew-normal (SN) and FMSN distributions, as well as previous results of Shannon and Rényi entropies for SN distributions. Section 3 presents the main results: the computation of upper and lower bounds of these information measures for FMSN distributions. Section 4 reports numerical results of some simulated examples and a real-world application of swordfish data. This paper ends with a discussion in Section 5. Some proofs are presented in Appendix A.

2. Preliminary Material

2.1. Skew-Normal Distribution

The multivariate skew-normal distribution was introduced in [18]. This class of flexible distributions regulates skewness, allowing a continuous variation from normality to skew-normality. Below, we use a slight variant of the original definition. A random vector $\mathbf{Z}\in\mathbb{R}^d$ has a skew-normal distribution with location vector $\boldsymbol{\xi}\in\mathbb{R}^d$, dispersion matrix $\Omega\in\mathbb{R}^{d\times d}$ and shape/skewness parameter $\boldsymbol{\eta}\in\mathbb{R}^d$, denoted by $\mathbf{Z}\sim SN_d(\boldsymbol{\theta})$, $\boldsymbol{\theta}=(\boldsymbol{\xi},\Omega,\boldsymbol{\eta})$, if its probability density function (pdf) is
$$f(\mathbf{z};\boldsymbol{\theta}) = 2\,\phi_d(\mathbf{z};\boldsymbol{\xi},\Omega)\,\Phi_1(\boldsymbol{\eta}^\top(\mathbf{z}-\boldsymbol{\xi});0,1), \tag{1}$$
where $\phi_d(\mathbf{z};\boldsymbol{\xi},\Omega)$ is the pdf of the $d$-variate $N_d(\boldsymbol{\xi},\Omega)$ distribution, $\Phi_1(\,\cdot\,;0,1)$ is the univariate $N_1(0,1)$ cumulative distribution function, and $|\Omega|\neq 0$, where $|\Omega|$ denotes the determinant of $\Omega$. The stochastic representation of $\mathbf{Z}$ is given by
$$\mathbf{Z} \stackrel{d}{=} \boldsymbol{\xi} + \boldsymbol{\delta}\,|U_0| + \mathbf{U}, \tag{2}$$
where $U_0\sim N(0,1)$ and $\mathbf{U}\sim N_d(\mathbf{0},\Omega-\boldsymbol{\delta}\boldsymbol{\delta}^\top)$ are independent, $\boldsymbol{\delta}=\Omega\boldsymbol{\eta}/\sqrt{1+\boldsymbol{\eta}^\top\Omega\boldsymbol{\eta}}$, and $|\boldsymbol{\delta}|\le 1$. Here, $|U_0|$ represents the absolute value of $U_0$, i.e., it is half-normal distributed. From Equation (2), Azzalini and Capitanio [19] derived the mean vector and covariance matrix of $\mathbf{Z}$:
$$\mathrm{E}[\mathbf{Z}] = \boldsymbol{\xi} + \sqrt{\tfrac{2}{\pi}}\,\boldsymbol{\delta}, \tag{3}$$
$$\mathrm{Var}[\mathbf{Z}] = \Omega - \tfrac{2}{\pi}\,\boldsymbol{\delta}\boldsymbol{\delta}^\top. \tag{4}$$
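As a quick numerical companion to this subsection, the density (1) can be coded directly. The sketch below is our own illustration (the function name and the use of SciPy are assumptions, not part of the paper):

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

def sn_pdf(z, xi, Omega, eta):
    """Skew-normal density of Eq. (1): 2 * phi_d(z; xi, Omega) * Phi_1(eta'(z - xi); 0, 1)."""
    z, xi, eta = np.asarray(z, dtype=float), np.asarray(xi, dtype=float), np.asarray(eta, dtype=float)
    phi = multivariate_normal(mean=xi, cov=Omega).pdf(z)   # phi_d(z; xi, Omega)
    return 2.0 * phi * norm.cdf(eta @ (z - xi))            # skewing factor Phi_1
```

With $\boldsymbol{\eta}=\mathbf{0}$ the skewing factor equals $2\Phi_1(0)=1$, so the $N_d(\boldsymbol{\xi},\Omega)$ density is recovered, which is a convenient sanity check.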

2.2. Finite Mixtures of Skew-Normal Distributions

Let us consider the definition of [14] for finite mixtures of skew-normal distributions. Let $\tilde{\boldsymbol{\theta}}=(\tilde{\boldsymbol{\xi}},\tilde{\Omega},\tilde{\boldsymbol{\eta}})$ denote the parameter vector set, where $\tilde{\boldsymbol{\xi}}=(\boldsymbol{\xi}_1,\ldots,\boldsymbol{\xi}_m)$ is a set of $m$ location vectors, $\tilde{\Omega}=(\Omega_1,\ldots,\Omega_m)$ a set of $m$ dispersion matrices, and $\tilde{\boldsymbol{\eta}}=(\boldsymbol{\eta}_1,\ldots,\boldsymbol{\eta}_m)$ a set of $m$ shape vectors. The pdf of an $m$-component mixture model with $m$ mixing weights $\boldsymbol{\pi}=(\pi_1,\ldots,\pi_m)$ is
$$p(\mathbf{y};\tilde{\boldsymbol{\theta}},\boldsymbol{\pi}) = \sum_{i=1}^{m}\pi_i f(\mathbf{y};\boldsymbol{\theta}_i), \tag{5}$$
where $\pi_i\ge 0$, $\sum_{i=1}^m\pi_i=1$, and $f(\mathbf{y};\boldsymbol{\theta}_i)$ is defined as in Equation (1) for a known $\boldsymbol{\theta}_i=(\boldsymbol{\xi}_i,\Omega_i,\boldsymbol{\eta}_i)$, $i=1,\ldots,m$. Additional details about the log-likelihood function of an FMSN model are described in [13]. Let $\mathbf{S}=(S_1,\ldots,S_n)$ be a set of $n$ latent allocations for the distribution of observations $\mathbf{y}$, $p(\mathbf{y};\tilde{\boldsymbol{\theta}},\boldsymbol{\pi})=\prod_{j=1}^{n}p(\mathbf{y};\tilde{\boldsymbol{\theta}},S_j)$, where $\Pr(S_j=i\,|\,\boldsymbol{\pi})=\pi_i$. Then, an equivalent stochastic representation of each $j$-th component density as in (2) is obtained:
$$\mathbf{Y}_j\,|\,(S_j=i) \stackrel{d}{=} \boldsymbol{\xi}_i + \boldsymbol{\delta}_i\,|U_{0j}| + (\Omega_i-\boldsymbol{\delta}_i\boldsymbol{\delta}_i^\top)^{1/2}\,\mathbf{U}_j, \quad j=1,\ldots,n, \tag{6}$$
where $U_{0j}$ and $\mathbf{U}_j$ are mutually independent and standardized one- and $d$-dimensional normal distributed, respectively; and $\boldsymbol{\delta}_i=\Omega_i\boldsymbol{\eta}_i/\sqrt{1+\boldsymbol{\eta}_i^\top\Omega_i\boldsymbol{\eta}_i}$, $i=1,\ldots,m$. Considering the stochastic representation (6), and the first and second moments of the $i$-th component of $\mathbf{Y}$, Equations (3) and (4), respectively, we obtain the first two moments of $\mathbf{Y}$:
$$\mathrm{E}[\mathbf{Y}] = \sum_{i=1}^{m}\pi_i\left(\boldsymbol{\xi}_i + \sqrt{\tfrac{2}{\pi}}\,\boldsymbol{\delta}_i\right), \tag{7}$$
$$\mathrm{Var}[\mathbf{Y}] = \sum_{i=1}^{m}\pi_i\left(\Omega_i - \tfrac{2}{\pi}\,\boldsymbol{\delta}_i\boldsymbol{\delta}_i^\top + \boldsymbol{\mu}_i\boldsymbol{\mu}_i^\top\right), \tag{8}$$
with $\boldsymbol{\mu}_i = \boldsymbol{\xi}_i + \sqrt{2/\pi}\,\boldsymbol{\delta}_i - \mathrm{E}[\mathbf{Y}]$, $i=1,\ldots,m$ (see, e.g., [6]).
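The latent-allocation construction suggests a direct sampler: draw $S_j$ from the weights and then apply Equation (6) per component. The sketch below is our own illustration (not code from the paper) and verifies Equations (7)–(8) by simulation:

```python
import numpy as np

def fmsn_rvs(pi, xis, Omegas, etas, n, rng):
    """Draws from the FMSN model via the latent allocations S_j and Eq. (6)."""
    m, d = len(pi), len(xis[0])
    comps = rng.choice(m, size=n, p=pi)                  # S_j with Pr(S_j = i) = pi_i
    out = np.empty((n, d))
    for i in range(m):
        idx = np.where(comps == i)[0]
        delta = Omegas[i] @ etas[i] / np.sqrt(1.0 + etas[i] @ Omegas[i] @ etas[i])
        u0 = np.abs(rng.standard_normal(len(idx)))       # half-normal |U_{0j}|
        U = rng.multivariate_normal(np.zeros(d), Omegas[i] - np.outer(delta, delta), len(idx))
        out[idx] = xis[i] + np.outer(u0, delta) + U
    return out

def fmsn_moments(pi, xis, Omegas, etas):
    """E[Y] and Var[Y] from Eqs. (7)-(8)."""
    deltas = [Om @ et / np.sqrt(1.0 + et @ Om @ et) for Om, et in zip(Omegas, etas)]
    mean = sum(p * (xi + np.sqrt(2/np.pi)*dl) for p, xi, dl in zip(pi, xis, deltas))
    cov = sum(p * (Om - (2/np.pi)*np.outer(dl, dl)
                   + np.outer(xi + np.sqrt(2/np.pi)*dl - mean,
                              xi + np.sqrt(2/np.pi)*dl - mean))
              for p, xi, Om, dl in zip(pi, xis, Omegas, deltas))
    return mean, cov
```

The empirical mean and covariance of a large sample should match the analytic moments, which makes this a useful consistency check on any implementation of the model.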

2.3. Entropies

Let $\mathbf{X}$ be a random vector defined in $\mathbb{R}^d$ for all values of a parameter $\boldsymbol{\theta}\in\Theta$, where $\Theta$ is an open subset of the real line and $f(\mathbf{x};\boldsymbol{\theta})$ is the pdf of $\mathbf{X}$. Let us consider the $\alpha$th-order Rényi entropy [10] of $\mathbf{X}$:
$$R_\alpha[\mathbf{X};\boldsymbol{\theta}] = \frac{1}{1-\alpha}\ln\int_{\mathbb{R}^d} f(\mathbf{x};\boldsymbol{\theta})^\alpha\,d\mathbf{x}, \quad 0<\alpha<\infty,\ \alpha\neq 1, \tag{9}$$
and the Shannon entropy is obtained by the limit
$$H[\mathbf{X};\boldsymbol{\theta}] = \lim_{\alpha\to 1} R_\alpha[\mathbf{X};\boldsymbol{\theta}] = \mathrm{E}[-\ln f(\mathbf{X};\boldsymbol{\theta})] = -\int_{\mathbb{R}^d} f(\mathbf{x};\boldsymbol{\theta})\ln f(\mathbf{x};\boldsymbol{\theta})\,d\mathbf{x}, \tag{10}$$
by applying l’Hôpital’s rule to $R_\alpha[\mathbf{X};\boldsymbol{\theta}]$ with respect to α. Hereafter, we will refer to Equation (10) as the expected information of $f(\mathbf{x};\boldsymbol{\theta})$ (for additional properties of the Shannon entropy, see [20]), and $\mathrm{E}[g(\mathbf{X})]$ denotes the expectation in $\mathbf{X}$ of the random function $g(\mathbf{x})=-\ln f(\mathbf{x};\boldsymbol{\theta})$. An important property of the Rényi entropy is $R_{\alpha_1}[\mathbf{X};\boldsymbol{\theta}]\ge R_{\alpha_2}[\mathbf{X};\boldsymbol{\theta}]$ for $\alpha_1\le\alpha_2$ (see, e.g., [11]). In addition, the Rényi entropy represents a generalization of the Shannon entropy and can be used to derive a continuous family of information measures.
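Because Equation (10) is an expectation, it can be approximated by plain Monte Carlo whenever sampling from $f$ and evaluating $f$ are both possible. The helper below is our own illustration, not code from the paper:

```python
import numpy as np
from scipy.stats import norm

def shannon_mc(sampler, pdf, n, rng):
    """Monte Carlo estimate of H = -E[ln f(X)], Eq. (10)."""
    x = sampler(n, rng)
    return -np.mean(np.log(pdf(x)))

# sanity check against the closed form for N(0,1): H = 0.5 * ln(2*pi*e)
rng = np.random.default_rng(1)
h_hat = shannon_mc(lambda n, r: r.standard_normal(n), norm.pdf, 200_000, rng)
```

The estimator is unbiased with variance $\mathrm{Var}[\ln f(\mathbf{X})]/n$, so the error shrinks at the usual $O(n^{-1/2})$ Monte Carlo rate.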
According to [21], the negentropy of a living system is the entropy it exports to keep its own entropy low, and thus it lies at the intersection of entropy and life. In our case, the negentropy component of the Rényi and Shannon entropies measures the departure of the distribution of $\mathbf{Z}$ from the normal distribution [11]. The following Proposition of [11] presents these differences in terms of the dispersion matrix and the shape/skewness parameter.
Proposition 1.
Let $\mathbf{Z}\sim SN_d(\boldsymbol{\theta})$, $\mathbf{Z}_N\sim N_d(\boldsymbol{\xi},\Omega)$, and $\bar{\eta}=\sqrt{\boldsymbol{\eta}^\top\Omega\boldsymbol{\eta}}$. Then,
(i)
the Shannon entropy of Z is
$$H[\mathbf{Z};\boldsymbol{\theta}] = H[\mathbf{Z}_N;\boldsymbol{\theta}_N] - N[\mathbf{Z};\boldsymbol{\theta}],$$
where
$$H[\mathbf{Z}_N;\boldsymbol{\theta}_N] = \tfrac{1}{2}\ln[(2\pi e)^d|\Omega|] \quad (\text{normal Shannon entropy}), \qquad N[\mathbf{Z};\boldsymbol{\theta}] = \mathrm{E}\left[\ln\{2\Phi_1(\bar{\eta}W)\}\right] \quad (\text{Shannon negentropy}),$$
with $W\sim SN_1(\bar{\eta})$.
(ii)
the $\alpha$th-order Rényi entropy of $\mathbf{Z}$, $\alpha=2,3,\ldots$, is
$$R_\alpha[\mathbf{Z};\boldsymbol{\theta}] = R_\alpha[\mathbf{Z}_N;\boldsymbol{\theta}_N] - N_\alpha[\mathbf{Z};\boldsymbol{\theta}],$$
where
$$R_\alpha[\mathbf{Z}_N;\boldsymbol{\theta}_N] = \tfrac{1}{2}\ln[(2\pi)^d|\Omega|] + \frac{d\ln\alpha}{2(\alpha-1)} \quad (\text{normal Rényi entropy}), \qquad N_\alpha[\mathbf{Z};\boldsymbol{\theta}] = \frac{1}{\alpha-1}\ln\left[\frac{2^\alpha\,\Phi_{\alpha+1}(\mathbf{0};\mathbf{0},\bar{\Omega})}{\Phi_1(0;0,\omega)}\right] \quad (\text{Rényi negentropy}),$$
with $\boldsymbol{\theta}_N=(\boldsymbol{\xi},\Omega)$, $\bar{\Omega}=\mathrm{I}_{\alpha+1}+\bar{\eta}^2\bar{D}\bar{D}^\top$, $\bar{D}=(\mathbf{1}_\alpha^\top,\bar{\eta})^\top$, where $\mathbf{1}_\alpha$ is the $\alpha$-dimensional vector of ones, and $\omega=1+\bar{\eta}^4$.
From Proposition 1, we observe that the Shannon negentropy depends on the dispersion matrix $\Omega$ only through the scalar shape parameter $\bar{\eta}$, whereas the Rényi entropy depends on both the $\Omega$ and $\boldsymbol{\eta}$ parameters. Contreras-Reyes [11] shows in Proposition 1(ii) that the skew-normal Rényi entropy reduces to the normal Rényi entropy [12] as $\boldsymbol{\eta}\to\mathbf{0}$.
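Proposition 1(i) is computationally convenient: only the univariate negentropy expectation needs numerical treatment. The sketch below is our own code; estimating the negentropy by Monte Carlo (rather than quadrature) is our choice for illustration:

```python
import numpy as np
from scipy.stats import norm

def sn_shannon(Omega, eta, n=300_000, seed=1):
    """Shannon entropy of SN_d via Prop. 1(i):
    H = 0.5*ln((2*pi*e)^d |Omega|) - E[ln 2*Phi_1(eta_bar*W)], W ~ SN_1(eta_bar)."""
    rng = np.random.default_rng(seed)
    d = Omega.shape[0]
    eta_bar = np.sqrt(eta @ Omega @ eta)
    # sample W ~ SN_1(eta_bar) through the representation (2)
    dlt = eta_bar / np.sqrt(1.0 + eta_bar**2)
    w = dlt*np.abs(rng.standard_normal(n)) + np.sqrt(1.0 - dlt**2)*rng.standard_normal(n)
    negentropy = np.mean(np.log(2.0*norm.cdf(eta_bar*w)))
    return 0.5*np.log((2*np.pi*np.e)**d * np.linalg.det(Omega)) - negentropy
```

With $\boldsymbol{\eta}=\mathbf{0}$ the negentropy term vanishes identically and the normal Shannon entropy is returned exactly.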

3. Results

In this section, we present practical results on upper and lower bounds of the Shannon and Rényi entropies for FMSN distributions, in Section 3.1 and Section 3.2, respectively, which are then used in the numerical simulations and the real-world application (Section 4).

3.1. Shannon Entropy Bounds

As in the normal case, the Shannon entropy of a mixture of skew-normal distributions does not have a closed form. However, the following proposition presents lower and upper bounds that approximate the entropy of a finite mixture of skew-normal densities.
Proposition 2.
Consider the FMSN density of $(\mathbf{Y};\tilde{\boldsymbol{\theta}},\boldsymbol{\pi})$ defined in Equation (5). Then, the following inequalities hold:
(i)
$$A_{lower} \le H[\mathbf{Y};\tilde{\boldsymbol{\theta}}] \le A_{upper},$$
(ii)
$$B_{lower} \le H[\mathbf{Y};\tilde{\boldsymbol{\theta}}] \le B_{upper},$$
where
$$A_{upper} = \tfrac{1}{2}\ln\{(2\pi e)^d|\Sigma|\}, \qquad A_{lower} = A_{upper} - \sum_{i=1}^{m}\pi_i N[\mathbf{Y};\boldsymbol{\theta}_i],$$
$$B_{upper} = A_{lower} - \sum_{i=1}^{m}\pi_i\ln\pi_i, \qquad B_{lower} = -\sum_{i=1}^{m}\pi_i\ln\left(\sum_{s=1}^{m}\pi_s\int f(t;\boldsymbol{\eta}_i)\,f(t;\boldsymbol{\eta}_s)\,dt\right),$$
with $N[\mathbf{Y};\boldsymbol{\theta}_i] = \mathrm{E}\left[\ln\{2\Phi_1(\tilde{\eta}_i W_i)\}\right] = \int f(w;\tilde{\eta}_i)\ln\{2\Phi_1(\tilde{\eta}_i w)\}\,dw$, $W_i\sim SN_1(\tilde{\eta}_i)$, $\tilde{\eta}_i=\sqrt{\boldsymbol{\eta}_i^\top\Omega_i\boldsymbol{\eta}_i}$, and $\Sigma=\mathrm{Var}[\mathbf{Y}]$ defined by Equation (8).
For the case m = 1, Contreras-Reyes and Arellano-Valle [22] consider the upper bound of property (i) of Proposition 2 to approximate the Shannon entropy of an SN distribution. In property (ii) of Proposition 2, the left side includes integrals of products of two skew-normal densities. When i = s, these integrals correspond to an $L_2$-norm and are represented by the quadratic Rényi entropy [11]. For the case i ≠ s, the integral does not have an explicit form and requires numerical methods to be computed. Moreover, the right side corresponds to the sum of the entropy of a multinomial density with parameters $\pi_1,\ldots,\pi_m$ and a second term based on the weights and shape parameters of the skew-normal densities. Other refinements can be found in [9]. These are suitable for cases with several components (for example, m ≥ 5), i.e., a skew-normal density consisting of several well-separated clusters.
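The bounds of Proposition 2(i) are straightforward to evaluate once $\Sigma$ is assembled from Equation (8). The sketch below is our own code (negentropies estimated by Monte Carlo), and the usage check reuses the parameters of Example 1 from Section 4.1:

```python
import numpy as np
from scipy.stats import norm

def fmsn_shannon_bounds(pi, xis, Omegas, etas, n=200_000, seed=2):
    """A_lower and A_upper of Prop. 2(i).
    A_upper = 0.5*ln((2*pi*e)^d |Sigma|), Sigma = Var[Y] from Eq. (8);
    A_lower = A_upper - sum_i pi_i * N[Y; theta_i] (negentropies by Monte Carlo)."""
    rng = np.random.default_rng(seed)
    d = len(xis[0])
    deltas = [Om @ et / np.sqrt(1.0 + et @ Om @ et) for Om, et in zip(Omegas, etas)]
    mean = sum(p*(xi + np.sqrt(2/np.pi)*dl) for p, xi, dl in zip(pi, xis, deltas))
    Sigma = np.zeros((d, d))
    negent = 0.0
    for p, xi, Om, et, dl in zip(pi, xis, Omegas, etas, deltas):
        mu = xi + np.sqrt(2/np.pi)*dl - mean              # mu_i of Eq. (8)
        Sigma += p*(Om - (2/np.pi)*np.outer(dl, dl) + np.outer(mu, mu))
        eb = np.sqrt(et @ Om @ et)                        # eta_bar_i
        w_dl = eb/np.sqrt(1.0 + eb**2)
        w = w_dl*np.abs(rng.standard_normal(n)) + np.sqrt(1.0 - w_dl**2)*rng.standard_normal(n)
        negent += p*np.mean(np.log(2.0*norm.cdf(eb*w)))   # pi_i * N[Y; theta_i]
    a_upper = 0.5*np.log((2*np.pi*np.e)**d * np.linalg.det(Sigma))
    return a_upper - negent, a_upper
```

For a single normal component both bounds collapse to the exact normal entropy, since the negentropy vanishes.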
A lower bound can be found for $H[\mathbf{Y};\tilde{\boldsymbol{\theta}}]$ using the $L_2$-norm of an FMSN density and Jensen's inequality [20]:
$$H[\mathbf{Y};\tilde{\boldsymbol{\theta}}] \ge -2\ln\left\|p(\mathbf{y};\tilde{\boldsymbol{\theta}},\boldsymbol{\pi})\right\|_2 = -\ln\int_{\mathbb{R}^d} p(\mathbf{y};\tilde{\boldsymbol{\theta}},\boldsymbol{\pi})^2\,d\mathbf{y}. \tag{11}$$
Considering Equation (9), Equation (11) and Proposition 2(i)–(ii), we obtain the additional inequality $R_2[\mathbf{Y};\tilde{\boldsymbol{\theta}}]\le H[\mathbf{Y};\tilde{\boldsymbol{\theta}}]$ and
$$B_{lower} \le A_{lower} \le B_{upper} \le A_{upper}. \tag{12}$$
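For a univariate mixture, the relation $R_2[\mathbf{Y};\tilde{\boldsymbol{\theta}}]\le H[\mathbf{Y};\tilde{\boldsymbol{\theta}}]$ can be checked by direct quadrature. The sketch below is our own code (the grid and the Example 1 parameters are purely illustrative) and also checks the maximum-entropy bound of Proposition 2(i):

```python
import numpy as np
from scipy.stats import norm

def fmsn_pdf_1d(y, pi, xi, omega, eta):
    """Univariate FMSN density, Eq. (5) with components as in Eq. (1)."""
    dens = np.zeros_like(np.asarray(y, dtype=float))
    for p, x, o, e in zip(pi, xi, omega, eta):
        s = np.sqrt(o)   # omega is the (scalar) dispersion
        dens += p * (2.0/s) * norm.pdf((y - x)/s) * norm.cdf(e*(y - x))
    return dens

# Example 1 parameters; entropies by trapezoidal quadrature
ys = np.linspace(-40.0, 60.0, 100_001)
py = fmsn_pdf_1d(ys, [0.3, 0.7], [0.5, 5.0], [3.5, 6.0], [0.5, 3.5])
H = -np.trapz(py*np.log(np.maximum(py, 1e-300)), ys)    # Eq. (10)
R2 = -np.log(np.trapz(py**2, ys))                       # quadratic Renyi entropy, cf. Eq. (11)
```

Here the quadratic Rényi entropy equals the right side of Equation (11), so the check confirms that it lower-bounds the Shannon entropy.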
The next section provides upper and lower bounds for Rényi entropy of FMSN random vectors.

3.2. Rényi Entropy Bounds

For the sake of simplicity, we define the following function in terms of the Rényi entropy and α as
$$P_\alpha[\mathbf{Y};\tilde{\boldsymbol{\theta}}] = e^{(1-\alpha)R_\alpha[\mathbf{Y};\tilde{\boldsymbol{\theta}}]}, \quad 0<\alpha<\infty,\ \alpha\neq 1,$$
for the calculation of bounds of $\int_{\mathbb{R}^d}p(\mathbf{y};\tilde{\boldsymbol{\theta}},\boldsymbol{\pi})^\alpha\,d\mathbf{y}$. By applying the function $\ln(\cdot)/(1-\alpha)$ to these integrals, we obtain the Rényi entropy of the FMSN density.
As in the Shannon entropy case, the Rényi entropy can be upper bounded in terms of the dispersion matrix of the finite mixture random variable. Sánchez-Moreno et al. [23] derived a multidimensional upper bound using a variational approach,
$$R_\alpha[\mathbf{Y};\tilde{\boldsymbol{\theta}}] \le \frac{d}{2}\ln|\Sigma|^{1/d} + F_d(\alpha), \tag{13}$$
with $\Sigma=\mathrm{Var}[\mathbf{Y}]$ defined in Equation (8),
$$F_d(\alpha) = \begin{cases} \dfrac{d}{2}\ln\left(\dfrac{\pi b}{\alpha-1}\right) + \dfrac{1}{\alpha-1}\ln\left(\dfrac{b}{2\alpha}\right) + \ln\Gamma\left(\dfrac{\alpha}{\alpha-1}\right) - \ln\Gamma\left(\dfrac{b}{2(\alpha-1)}\right), & \text{if } \alpha>1, \\[2ex] \dfrac{d}{2}\ln\left(\dfrac{\pi b}{1-\alpha}\right) + \dfrac{\alpha}{\alpha-1}\ln\left(\dfrac{b}{2\alpha}\right) - \ln\Gamma\left(\dfrac{\alpha}{1-\alpha}\right) + \ln\Gamma\left(\dfrac{b}{2(1-\alpha)}\right), & \text{if } \alpha\in\left(\dfrac{d}{d+2},1\right), \\[2ex] H[\mathbf{W}_0;\boldsymbol{\theta}_0], & \text{if } \alpha=1, \end{cases}$$
$b=(2+d)\alpha-d$, $\boldsymbol{\theta}_0=(\mathbf{0},\mathrm{I}_d)$, and $\mathbf{W}_0\sim N_d(\mathbf{0},\mathrm{I}_d)$ ($\mathrm{I}_d$ denotes the $d$-dimensional identity matrix). $H[\mathbf{W}_0;\boldsymbol{\theta}_0]$ is obtained using property (i) of Proposition 1.
The right side of the inequality (13) is equivalent to the maximum Shannon entropy of Proposition 2. The first term depends on the dispersion matrix and the shape parameters, and the second only depends on the αth order and dimension d.
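For α > 1, the first branch of $F_d(\alpha)$ can be evaluated with a log-gamma function. The sketch below is our own reconstruction of the α > 1 case (function names are ours):

```python
import numpy as np
from scipy.special import gammaln

def F_d(alpha, d):
    """F_d(alpha) for alpha > 1, with b = (2 + d)*alpha - d."""
    b = (2.0 + d)*alpha - d
    return (0.5*d*np.log(np.pi*b/(alpha - 1.0))
            + np.log(b/(2.0*alpha))/(alpha - 1.0)
            + gammaln(alpha/(alpha - 1.0))
            - gammaln(b/(2.0*(alpha - 1.0))))

def renyi_upper(Sigma, alpha):
    """Right side of (13): (d/2) ln|Sigma|^(1/d) + F_d(alpha) = 0.5 ln|Sigma| + F_d(alpha)."""
    d = Sigma.shape[0]
    return 0.5*np.log(np.linalg.det(Sigma)) + F_d(alpha, d)
```

As α → 1, $F_d(\alpha)$ approaches $\frac{d}{2}\ln(2\pi e)$, so the right side of (13) tends to the maximum Shannon entropy bound, consistent with the remark above.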
The next Lemma presents a useful result to compute the lower bound for Rényi entropy of an FMSN random vector Y in terms of each component.
Lemma 1.
Consider the FMSN density of $(\mathbf{Y};\tilde{\boldsymbol{\theta}},\boldsymbol{\pi})$ defined in Equation (5). Then,
$$\int_{\mathbb{R}^d} p(\mathbf{y};\tilde{\boldsymbol{\theta}},\boldsymbol{\pi})^\alpha\,d\mathbf{y} \le P_\alpha[\mathbf{Y};\boldsymbol{\theta}_m] + \sum_{i=1}^{m-1}\left(\sum_{k=1}^{i}\pi_k\right)^{\alpha}\left(P_\alpha[\mathbf{Y};\boldsymbol{\theta}_i]-P_\alpha[\mathbf{Y};\boldsymbol{\theta}_{i+1}]\right),$$
with $0<\alpha<\infty$, $\alpha\neq 1$, and $m\ge 1$.

4. Numerical Results

4.1. Simulations

To study the behavior of the Shannon entropy bounds of Proposition 2 and the Rényi entropy bounds of Equation (13) and Lemma 1, some examples are simulated for the cases d = 1 , 2 and 3:
  • Example 1: $d=1$, $m=2$, $\boldsymbol{\pi}=(0.3, 0.7)$, $\tilde{\boldsymbol{\xi}}=(0.5, 5)$, $\tilde{\Omega}=(3.5, 6)$, and $\tilde{\boldsymbol{\eta}}=(0.5, 3.5)$;
  • Example 2: [24] $d=1$, $m=3$, $\boldsymbol{\pi}=(0.5, 0.2, 0.3)$, $\tilde{\boldsymbol{\xi}}=(2, 20, 35)$, $\tilde{\Omega}=(9, 16, 9)$, and $\tilde{\boldsymbol{\eta}}=(5, 3, 6)$;
  • Example 3: [24] $d=2$, $m=2$, $\boldsymbol{\pi}=(0.65, 0.35)$, $\tilde{\boldsymbol{\xi}}=((5, 7)^\top, (2, 5)^\top)$, $\tilde{\Omega}=([0.18, 0.6;\ 0.6, 4],\ [0.15, 1.15;\ 1.15, 4])$ (matrices written row-wise, rows separated by semicolons), and $\tilde{\boldsymbol{\eta}}=((0.69, 0.64)^\top, (4.3, 2.7)^\top)$;
  • Example 4: [24] $d=2$, $m=3$, $\boldsymbol{\pi}=(0.25, 0.5, 0.25)$, $\tilde{\boldsymbol{\xi}}=((0, 0)^\top, (5, 5)^\top, (2, 8)^\top)$, $\tilde{\Omega}=([3, 1;\ 1, 3],\ [2, 1;\ 1, 2],\ [0.15, 1.15;\ 1.15, 40])$, and $\tilde{\boldsymbol{\eta}}=((4, 4)^\top, (2, 2)^\top, (4.3, 2.7)^\top)$;
  • Example 5: [2] $d=3$, $m=3$, $\boldsymbol{\pi}=(0.22, 0.36, 0.42)$, $\tilde{\boldsymbol{\xi}}=((10, 12, 10)^\top, (8.5, 10.5, 8.5)^\top, (12, 14, 12)^\top)$, $\tilde{\Omega}=(\mathrm{I}_3, \mathrm{I}_3, \mathrm{I}_3)$, and $\tilde{\boldsymbol{\eta}}=((4, 0, 1)^\top, (2, 1, 3)^\top, (4, 2, 2)^\top)$;
  • Example 6: [25] $d=3$, $m=4$, $\boldsymbol{\pi}=(0.125, 0.19, 0.135, 0.55)$, $\tilde{\boldsymbol{\xi}}=((420, 360, 425)^\top, (160, 570, 200)^\top, (320, 540, 260)^\top, (530, 80, 450)^\top)$, $\tilde{\Omega}=([9160, 5580, 7000;\ 5580, 12105, 7160;\ 7000, 7160, 7250],\ [3870, 1810, 1770;\ 1810, 2900, 1270;\ 1770, 1270, 1320],\ [1695, 1190, 2280;\ 1190, 2780, 2010;\ 2280, 2010, 3720],\ [1590, 590, 15;\ 590, 2425, 415;\ 15, 415, 1870])$, and $\tilde{\boldsymbol{\eta}}=((4.8, 17, 50)^\top, (4, 80, 60)^\top, (40, 8, 10)^\top, (60, 90, 6)^\top)$.
Figure 1 presents the examples mentioned in the settings above. Examples 1 and 2 are represented in histogram plots and Examples 3 and 4 in contour plots, according to [24]. Examples 5 and 6 are represented in 3D plots, according to [25]. For all simulations, a sample of n = 500 draws is considered and then fitted using the function smsn.mix from the mixsmsn package, developed by [24] and implemented in the R environment [26]. Prates et al. [24] implemented routines for ML estimation via the Expectation-Maximization (EM)-type algorithm in FMSN models (among several others).
For each example, Table 1 summarizes the four Shannon entropy bounds, as well as the Rényi entropy bounds for α = 2, …, 5 and m = 2, …, 6. Shannon and Rényi entropies are compared with the AIC and BIC criteria (see, e.g., [27]), misclassification (MC) rates and consistency scores: normal skill score (NSS), Heidke skill score (HSS), and Hanssen–Kuipers (HK) (see, e.g., [28]). All these indicators show an optimal performance of model fit if they are near 1, except MC, which ideally should be close to 0 (i.e., 100(1 − MC) ≈ 100%). For all examples, it is worth pointing out that these criteria are optimal for minimum AIC and BIC values (marked in gray). For Examples 1–3, the misclassification rates are close to 0, and, for Examples 4–6, less than 0.46. This is due to the complexity of high-dimensional systems and models fitted with an excess of parameters.
The information measures illustrate a similar effect. It can be seen that the inequalities given in (12) hold in the Shannon entropy case and that the information increases for more complex systems, the 3D systems being characterized by larger sets of components and dispersion matrices with large elements. With respect to the Rényi entropies, the lower and upper bounds increase rather slowly with more components in Examples 1–3, but rise faster with more components in Examples 4–6. However, in Examples 4–6, the lower and upper bounds are maximal for large α. Therefore, the Rényi information criterion is suitable for model fits with accurate classification of observations, i.e., poor performance of the Rényi entropy is related to an inadequate selection of components in complex systems. Additionally, the Rényi entropy of an FMSN model is localized between the upper and lower bounds, and an approximation is given by the mean of these bounds.

4.2. Application

Estimation of age from growth of swordfish (Xiphias gladius Linnaeus) is an important factor in assessing stock trends [29]. The swordfish is a highly migratory pelagic species that has been observed in tropical to temperate waters (between 5 and 27 °C) in the western and eastern Pacific and Atlantic [30]. A more detailed description of this species can be found in [30].
Age and growth estimation of swordfish presents several difficulties [29]. For example, Cerna [30] describes age estimates obtained by cross-sections of the second anal fin ray [31], which is an expensive procedure for age estimation. Queele et al. [29] recall the inconclusive results obtained from the indirect validation test.
Roa-Ureta [32] maintains that, since age is a latent variable, extracting growth information objectively is difficult. He estimates growth parameters using a likelihood-based approach built on a normal mixture model, applied to a squat lobster length dataset where age is unknown. The normal mixture model components are determined by AIC, which depends on the sample size and the number of parameters of the mixture.
This application is motivated by the determination of age–length relationship by sex group using information measures. This is presented in a framework format based on the following steps:
(a)
The matrix of data includes both length and weight (d = 2) for each observation. To rule out collinearity, the length–weight regression is computed to show the non-linear relationship between the two columns.
(b)
Given that the number of components is unknown (age is unknown), the FMSN parameters are estimated considering the two-dimensional matrix of the last step for several values of m.
(c)
The number of components is determined by the bounds of information measures developed in Section 3 and then compared with AIC and BIC criteria.
(d)
The observed (measures obtained from the procedure of [30]) and estimated (by selected mixture model) ages of all observations are compared using a misclassification analysis.
Section 4.2.1 describes the dataset used, and Section 4.2.2 and Section 4.2.3 describe the results for the steps mentioned.

4.2.1. Data and Software

The dataset used for evaluating the performance of our findings corresponds to a sample of 486 and 507 swordfish male and female length observations, respectively. The samples were collected in the southeastern Pacific off Chile during 2011 and were obtained through the routine sampling program of the fishery conducted by the Instituto de Fomento Pesquero. All fish were measured to the nearest centimeter; the catch included fish between 120–257 cm for males and 110–299 cm for females. As described in Section 4.1, the FMSN parameter estimates were computed using the mixsmsn package.

4.2.2. Length–Weight Relationship

Following [33] (and references therein), we briefly describe the length–weight function. This function explains the increments in weight of species in terms of their length by the non-linear relationship
$$W(x) = \alpha x^\beta, \tag{14}$$
where W ( x ) represents the observed weight at length x, α is the theoretical weight at length zero and β is the weight growth rate.
The model (14) is fitted to an empirical dataset $(y_i,x_i)\in\mathbb{R}^+\times\mathbb{R}^+$, $i=1,\ldots,n$. This can be described in terms of a multiplicative error structure, $y_i=W(x_i)\varepsilon_i$, where the $\varepsilon_i$ are non-negative random errors whose log transformations are given by $\varepsilon_i^*=\ln\varepsilon_i$. Here, we consider the residuals $\varepsilon_i^*$ iid and normal distributed, denoted by $N(0,\sigma^2)$, for a dispersion parameter $\sigma^2$.
Figure 2 illustrates the linear regression fits of (14), for which we have a high value of the R² coefficient of determination for both sexes (Table 2). There exists a small number of observations in length classes larger than 210 and 250 cm for males and females, respectively, that tend to be isolated with respect to lighter weights. Given the good fit of the length–weight model, a non-linear relationship can be assumed between length and weight. Therefore, we consider a matrix with two columns constructed from these variables for the clustering modeling.
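Fitting Equation (14) under the multiplicative error structure reduces to ordinary least squares on the log scale. The sketch below is our own code, run on synthetic data standing in for the real measurements (the swordfish dataset is available only on request):

```python
import numpy as np

def fit_length_weight(x, w):
    """Fit W(x) = a*x^b of Eq. (14) by least squares on ln w = ln a + b ln x."""
    b, log_a = np.polyfit(np.log(x), np.log(w), 1)   # slope, intercept
    return np.exp(log_a), b

# synthetic lengths/weights with multiplicative log-normal errors (illustrative values)
rng = np.random.default_rng(3)
x = rng.uniform(110.0, 300.0, 400)
w = 1e-5 * x**3.0 * np.exp(0.05*rng.standard_normal(400))
a_hat, b_hat = fit_length_weight(x, w)
```

Because the errors are multiplicative and log-normal, the log-scale regression residuals are exactly the normal residuals assumed above, so standard linear-model diagnostics (such as R²) apply directly.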

4.2.3. Clustering and Model Selection

As in Section 4.1, the length–weight data are evaluated with the FMSN model for several values of m, depending on the maximum age by sex. Some authors report that the maximum age in males and females reaches 9 and 11 years, respectively [29,30,31]. One of the difficulties observed by anal-fin readers was that they could find multiple annuli and the disappearance of the first annulus in older fish, so careful interpretation is important [29]. In addition, this species was aged as younger at given body lengths, i.e., it was difficult to find older fish due to selectivity [27,30]. We take these facts into account when discussing the optimal number of clusters for the classification of lengths into age classes.
To reduce the scale of the plots, Figure 3 presents the logarithm of the average between the upper and lower bounds for the Shannon and Rényi entropies, for m = 1, …, 9 in males and m = 1, …, 11 in females. It is worth pointing out that the values related to the Shannon entropy (panels (a) and (c)) increase as the number of components increases in both sexes. Panels (b) and (d) show that the values related to the Rényi entropies increase until m = 7 and then decrease. This means that the Rényi entropy bounds provide information about the models and help us determine a criterion to choose a possible number of components for each sex group. There also exist some differences between α values, where the quadratic Rényi entropy (α = 2) provides the most information.
The results mentioned before are compared first with the AIC and BIC criteria in Table 3. These criteria increase when the number of components increases, and the minimum AIC and BIC values correspond to the simplest model, m = 2. Table 3 also shows the misclassification (MC) rates and consistency scores considered in Section 4.1. All these indicators are applied to the assigned observations for each cluster and the observed age, for each FMSN and Finite Mixture of Normal (FMN) model. The values corresponding to m = 7 clusters, marked in gray, provide the best results. The model has a classification rate of 71% and 65% for males and females, respectively, and the highest values of the NSS, HSS and HK scores. The best FMN model corresponded to m = 6 and m = 8 for males and females, respectively, with respective classification rates of 57% and 55%.
The FMSN fits for length–weight of males and females are shown in Figure 4. The lengths of the older fish present high variability compared to the younger ones. The group of males has the parameters $\boldsymbol{\pi}=(0.167, 0.117, 0.159, 0.257, 0.025, 0.084, 0.191)$, $\tilde{\boldsymbol{\xi}}=((175.77, 68.10)^\top, (192.98, 93.99)^\top, (141.15, 34.65)^\top, (155.55, 41.27)^\top, (211.49, 156.93)^\top, (201.31, 105.25)^\top, (162.76, 53.97)^\top)$, $\tilde{\Omega}=([6.47, 2.05;\ 2.05, 7.90],\ [8.74, 4.18;\ 4.18, 10.74],\ [8.45, 4.09;\ 4.09, 5.28],\ [7.66, 1.81;\ 1.81, 6.73],\ [21.63, 12.29;\ 12.29, 14.91],\ [14.16, 7.83;\ 7.83, 18.12],\ [7.20, 2.34;\ 2.34, 6.94])$ (matrices written row-wise, rows separated by semicolons), and $\tilde{\boldsymbol{\eta}}=((0.87, 1.06)^\top, (0.62, 1.35)^\top, (1.28, 0.99)^\top, (1.12, 1.23)^\top, (0.91, 1.19)^\top, (0.83, 0.87)^\top, (0.82, 0.73)^\top)$; and for females, $\boldsymbol{\pi}=(0.279, 0.058, 0.109, 0.070, 0.240, 0.015, 0.229)$, $\tilde{\boldsymbol{\xi}}=((193.98, 82.98)^\top, (236.10, 205.41)^\top, (207.07, 120.38)^\top, (221.49, 158.33)^\top, (153.72, 43.52)^\top, (264.01, 283.60)^\top, (161.01, 51.52)^\top)$, $\tilde{\Omega}=([11.23, 4.31;\ 4.31, 13.57],\ [16.77, 6.92;\ 6.92, 25.62],\ [9.68, 4.03;\ 4.03, 15.32],\ [14.88, 5.98;\ 5.98, 18.20],\ [13.01, 6.91;\ 6.91, 9.03],\ [16.87, 13.23;\ 13.23, 58.63],\ [8.86, 5.29;\ 5.29, 10.43])$, and $\tilde{\boldsymbol{\eta}}=((0.99, 0.67)^\top, (1.13, 1.25)^\top, (1.09, 1.11)^\top, (1.18, 1.33)^\top, (1.23, 0.86)^\top, (0.20, 1.49)^\top, (0.95, 0.96)^\top)$.

5. Conclusions

5.1. Methodology

In this paper, lower and upper bounds of the Shannon and Rényi entropies for FMSN distributions were derived. Using such a pair of bounds, some kind of confidence interval for the approximate entropy value can be calculated, where the average between these values can be used as an approximation of the entropy. We presented practical (bounds) and theoretical (bounds and asymptotic expression) results for Rényi entropy. In the case of practical results, the first upper bound deals only with the density parameters and the second one with the density and mixing weights parameters.
Inserting the ML (fixed) parameter estimates represents the simplest evaluation of these bounds [22]. However, a considerable gap exists between the lower and upper Rényi entropy bounds. For this reason, further research should consider the exact expression and the asymptotic approximation presented in this paper. In addition, the Bayesian approach allows for a direct estimation of the entropies, depending on the accuracy of the prior parameters, where performance can be substantially improved compared to ML or nonparametric estimators [34].
The results presented are also valid for the normal case, taking the shape parameter set η = (0, …, 0), and for integer values of α [11]. However, numerical algorithms can be applied for real values of α (α ≥ 2), although that requires more challenging computational work. In addition, Proposition 2 and Lemma 1 are also valid for other continuous densities for which the Rényi entropies of the components exist. We hope the Rényi entropy developments in finite mixtures of densities can stimulate more research in the future, for more flexible densities such as the skew-t distribution [27].

5.2. Application

We applied two-dimensional length–weight data to the determination of swordfish age. We considered a length–weight dataset instead of the usual lengths (considered by [32]) to determine the number of clusters, and subsequently compared it with the real observations obtained by the procedure of [30]. The best results were obtained using the Rényi entropy, as an average between upper and lower bounds, over the Shannon entropy and the information criteria. Additionally, the classification rates and consistency scores of the FMSN models showed better results than those of the FMN model.
Wrongly classified observations arise with older fish because they produce higher variability in the length–weight relationship. Moreover, age determination in these age classes is difficult for the reasons mentioned in Section 4.2. We encourage anal-fin readers to consider the proposed methodology to compare their results with this statistical methodology, especially for the revision of older fish data.

Supplementary Materials

The R codes for the upper and lower bounds of the Shannon and Rényi entropies are available upon request to the corresponding author. The swordfish data are available upon request to the Instituto de Fomento Pesquero (IFOP, Valparaíso, Chile), website: http://www.ifop.cl.

Acknowledgments

We thank the Instituto de Fomento Pesquero (IFOP, Valparaíso, Chile), for providing the biological information used in this work. The research of J. Contreras-Reyes was supported by Comisión Nacional de Investigación Científico y Tecnológico (CONICYT, Santiago, Chile) doctoral scholarship 2016 No. 21160618 (Res. Ex. 4128/2016). We would like to thank the editor and three anonymous reviewers for their helpful comments, suggestions, and valuable discussion of this work.

Author Contributions

J. Contreras-Reyes and D. Devia Cortés conceived the experiments and analyzed the data; J. Contreras-Reyes designed and performed the experiments, contributed reagents/analysis tools and wrote the paper; and D. Devia Cortés contributed materials tools. All authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Proof of Proposition 2.
(i)
For any finite mixture $f(\mathbf{x};\tilde{\boldsymbol{\theta}},\boldsymbol{\pi})=\sum_{i=1}^m\pi_i f(\mathbf{x};\boldsymbol{\theta}_i)$, where $\boldsymbol{\theta}_i$ is the parameter set associated with each $i$-th component $f(\mathbf{x};\boldsymbol{\theta}_i)$, $\tilde{\boldsymbol{\theta}}=(\boldsymbol{\theta}_1,\ldots,\boldsymbol{\theta}_m)$, $\pi_i\ge 0$, $\sum_{i=1}^m\pi_i=1$, and $\mathbf{X}\in\mathbb{R}^d$ is not necessarily normal, with non-zero location vector and dispersion matrix $\Lambda$. Then,
$$\sum_{i=1}^{m}\pi_i H[\mathbf{X};\boldsymbol{\theta}_i] \le H[\mathbf{X};\tilde{\boldsymbol{\theta}}] \le \tfrac{1}{2}\ln\{(2\pi e)^d|\Lambda|\}. \tag{A1}$$
For a proof of Equation (A1), see pp. 27 and 663 of [20]. Basically, the fact that $g(t)=\ln t$ is a concave function ($-g(t)$ is convex) allows the use of Jensen's inequality. Then, considering the location vector (7) and dispersion matrix (8), we have the inequalities $H[\mathbf{Y};\tilde{\boldsymbol{\theta}}]\le A_{upper}$ and $H[\mathbf{Y};\tilde{\boldsymbol{\theta}}]\ge\sum_{i=1}^m\pi_i H[\mathbf{Y};\boldsymbol{\theta}_i]$. Considering property (i) of Proposition 1 and the condition $\sum_{i=1}^m\pi_i=1$, we prove the left side of the inequality.
(ii)
Left side: by the property of log concavity for skew-normal densities [35] and employing Jensen’s inequality [20], the proof is analogous to Theorem 2 of [9]. Right side: see Theorem 3 of [9]. ☐
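The sandwich in Equation (A1) is easy to check numerically in a simple case. The sketch below is our own illustration (not the authors' R code, and the mixture parameters are arbitrary): it evaluates the Shannon entropy of a univariate two-component normal mixture by numerical integration and compares it with the lower bound $\sum_i \pi_i H[X;\theta_i]$ and the upper bound $\tfrac{1}{2}\ln(2\pi e \Lambda)$, where $\Lambda$ is the overall mixture variance.

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Illustrative two-component univariate normal mixture (d = 1)
pi_ = np.array([0.4, 0.6])
mu = np.array([-1.0, 2.0])
sigma = np.array([0.8, 1.5])

# Shannon entropy of the mixture via a fine Riemann sum
x = np.linspace(-15.0, 15.0, 600001)
dx = x[1] - x[0]
p = sum(w * normal_pdf(x, m, s) for w, m, s in zip(pi_, mu, sigma))
mask = p > 0
H_mix = -np.sum(p[mask] * np.log(p[mask])) * dx

# Lower bound of (A1): mixture of component entropies H_i = 0.5*ln(2*pi*e*sigma_i^2)
H_low = np.sum(pi_ * 0.5 * np.log(2 * np.pi * np.e * sigma ** 2))

# Upper bound of (A1): entropy of a normal with the mixture's overall variance Lambda
mean_mix = np.sum(pi_ * mu)
var_mix = np.sum(pi_ * (sigma ** 2 + mu ** 2)) - mean_mix ** 2
H_up = 0.5 * np.log(2 * np.pi * np.e * var_mix)

assert H_low < H_mix < H_up
print(round(H_low, 3), round(H_mix, 3), round(H_up, 3))
```

The upper bound here is the maximum-entropy property of the normal distribution with matched variance; the lower bound follows from conditioning on the (unobserved) component label.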
Proof of Lemma 1.
Consider Proposition 1 (B1) of [36]. Let $p \geq 1$; then, for an $\alpha$th order with $1 \leq \alpha \leq p$, we have
$$p(y; \tilde{\theta}, \pi)^{\alpha} = \left[ \sum_{i=1}^{m} \pi_i f(y; \theta_i) \right]^{\alpha} \overset{(B1)}{\geq} \left[ \sum_{i=1}^{m} f(y; \theta_i)^{p} \right]^{\frac{\alpha}{p} - 1} \left[ \sum_{i=1}^{m-1} i^{1 - \frac{\alpha}{p}} \left( \sum_{k=1}^{i} \pi_k \right)^{\alpha} \left\{ f(y; \theta_i)^{p} - f(y; \theta_{i+1})^{p} \right\} + m^{1 - \frac{\alpha}{p}} \left( \sum_{k=1}^{m} \pi_k \right)^{\alpha} f(y; \theta_m)^{p} \right]. \quad (A2)$$
By choosing $p = \alpha$, related to condition (iii) of Proposition 1 of [36], in Equation (A2), the following inequality holds:
$$p(y; \tilde{\theta}, \pi)^{\alpha} \geq f(y; \theta_m)^{\alpha} + \sum_{i=1}^{m-1} \left( \sum_{k=1}^{i} \pi_k \right)^{\alpha} \left\{ f(y; \theta_i)^{\alpha} - f(y; \theta_{i+1})^{\alpha} \right\}. \quad (A3)$$
The conditions (i), (ii), and (iv) of Proposition 1 of [36] cannot be satisfied under the Rényi entropy conditions of Equation (9), and thus equality in (A3) does not hold. Finally, integrating both sides of (A3) yields the result. ☐
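The quantity bounded in (A3), once integrated, is $\int p(y)^{\alpha}\,dy$, which enters the Rényi entropy $R_{\alpha} = (1-\alpha)^{-1} \ln \int p^{\alpha}$. As an illustration we add here (not part of the paper's code), the sketch below computes $R_{\alpha}$ by numerical integration and checks it against the well-known closed form for a standard normal density, then evaluates it for an arbitrary two-component normal mixture.

```python
import numpy as np

def renyi_entropy(pdf_vals, dx, alpha):
    # R_alpha = ln( integral p(y)^alpha dy ) / (1 - alpha), for alpha != 1
    return np.log(np.sum(pdf_vals ** alpha) * dx) / (1.0 - alpha)

x = np.linspace(-20.0, 20.0, 400001)
dx = x[1] - x[0]

# Standard normal: closed form is 0.5*ln(2*pi) + ln(alpha) / (2*(alpha - 1))
phi = np.exp(-0.5 * x ** 2) / np.sqrt(2.0 * np.pi)
for alpha in (2, 3, 4, 5):
    exact = 0.5 * np.log(2.0 * np.pi) + np.log(alpha) / (2.0 * (alpha - 1.0))
    print(alpha, renyi_entropy(phi, dx, alpha), exact)

# An arbitrary two-component mixture: its integral of p^alpha is what Lemma 1 bounds
p = 0.3 * np.exp(-0.5 * (x + 1.0) ** 2) / np.sqrt(2.0 * np.pi) \
    + 0.7 * np.exp(-0.5 * ((x - 2.0) / 1.5) ** 2) / (1.5 * np.sqrt(2.0 * np.pi))
print([round(renyi_entropy(p, dx, a), 4) for a in (2, 3, 4, 5)])
```

Note that $R_{\alpha}$ is non-increasing in $\alpha$, which the mixture values reflect.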

References

  1. McLachlan, G.; Peel, D. Finite Mixture Models; John Wiley & Sons: New York, NY, USA, 2000.
  2. Celeux, G.; Soromenho, G. An entropy criterion for assessing the number of clusters in a mixture model. J. Classif. 1996, 13, 195–212.
  3. Jenssen, R.; Hild, K.E.; Erdogmus, D.; Principe, J.C.; Eltoft, T. Clustering using Renyi's entropy. IEEE Proc. Int. Jt. Conf. Neural Netw. 2003, 1, 523–528.
  4. Amoud, H.; Snoussi, H.; Hewson, D.; Doussot, M.; Duchêne, J. Intrinsic mode entropy for nonlinear discriminant analysis. IEEE Signal Process. Lett. 2007, 14, 297–300.
  5. Caillol, H.; Pieczynski, W.; Hillion, A. Estimation of fuzzy Gaussian mixture and unsupervised statistical image segmentation. IEEE Trans. Image Process. 1997, 6, 425–440.
  6. Carreira-Perpiñán, M.A. Mode-finding for mixtures of Gaussian distributions. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1318–1323.
  7. Durrieu, J.-L.; Thiran, J.; Kelly, F. Lower and upper bounds for approximation of the Kullback–Leibler divergence between Gaussian mixture models. In Proceedings of the 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Kyoto, Japan, 25–30 March 2012; pp. 4833–4836.
  8. Nielsen, F.; Sun, K. Guaranteed bounds on the Kullback–Leibler divergence of univariate mixtures. IEEE Signal Process. Lett. 2016, 23, 1543–1546.
  9. Huber, M.F.; Bailey, T.; Durrant-Whyte, H.; Hanebeck, U.D. On entropy approximation for Gaussian mixture random vectors. In Proceedings of the IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, Seoul, Korea, 20–22 August 2008; pp. 181–188.
  10. Rényi, A. On measures of entropy and information. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, CA, USA, 20 June–30 July 1960; Neyman, J., Ed.; University of California Press: Berkeley, CA, USA, 1961; Volume 1, pp. 547–561.
  11. Contreras-Reyes, J.E. Rényi entropy and complexity measure for skew-Gaussian distributions and related families. Physica A 2015, 433, 84–91.
  12. Zografos, K.; Nadarajah, S. Expressions for Rényi and Shannon entropies for multivariate distributions. Stat. Probab. Lett. 2005, 71, 71–84.
  13. Lin, T.I. Maximum likelihood estimation for multivariate skew normal mixture models. J. Multivar. Anal. 2009, 100, 257–265.
  14. Frühwirth-Schnatter, S.; Pyne, S. Bayesian inference for finite mixtures of univariate and multivariate skew-normal and skew-t distributions. Biostatistics 2010, 11, 317–336.
  15. Lee, S.X.; McLachlan, G.J. On mixtures of skew normal and skew t-distributions. Adv. Data Anal. Classif. 2013, 7, 241–266.
  16. Lee, S.X.; McLachlan, G.J. Model-based clustering and classification with non-normal mixture distributions. Stat. Meth. Appl. 2013, 22, 427–454.
  17. Lin, T.I.; Ho, H.J.; Lee, C.R. Flexible mixture modelling using the multivariate skew-t-normal distribution. Stat. Comput. 2014, 24, 531–546.
  18. Azzalini, A.; Dalla-Valle, A. The multivariate skew-normal distribution. Biometrika 1996, 83, 715–726.
  19. Azzalini, A.; Capitanio, A. Statistical applications of the multivariate skew normal distributions. J. R. Stat. Soc. Ser. B 1999, 61, 579–602.
  20. Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; John Wiley & Sons, Inc.: New York, NY, USA, 2006.
  21. Schrödinger, E. What is Life? The Physical Aspect of the Living Cell; Cambridge University Press: Cambridge, UK, 1944.
  22. Contreras-Reyes, J.E.; Arellano-Valle, R.B. Kullback–Leibler divergence measure for multivariate skew-normal distributions. Entropy 2012, 14, 1606–1626.
  23. Sánchez-Moreno, P.; Zozor, S.; Dehesa, J.S. Upper bounds on Shannon and Rényi entropies for central potentials. J. Math. Phys. 2011, 52, 022105.
  24. Prates, M.O.; Lachos, V.H.; Cabral, C. mixsmsn: Fitting finite mixture of scale mixture of skew-normal distributions. J. Stat. Softw. 2013, 54, 1–20.
  25. Lee, S.X.; McLachlan, G.J. EMMIXuskew: An R package for fitting mixtures of multivariate skew t-distributions via the EM algorithm. J. Stat. Softw. 2013, 55, 1–22.
  26. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2015.
  27. Contreras-Reyes, J.E.; Arellano-Valle, R.B.; Canales, T.M. Comparing growth curves with asymmetric heavy-tailed errors: Application to the southern blue whiting (Micromesistius australis). Fish. Res. 2014, 159, 88–94.
  28. Contreras-Reyes, J.E. Nonparametric assessment of aftershock clusters of the Maule earthquake Mw = 8.8. J. Data Sci. 2013, 11, 623–638.
  29. Quelle, P.; González, F.; Ruiz, M.; Valeiras, X.; Gutierrez, O.; Rodriguez-Marin, E.; Mejuto, J. An approach to age and growth of south Atlantic swordfish (Xiphias gladius) stock. Collect. Vol. Sci. Pap. ICCAT 2014, 70, 1927–1944.
  30. Cerna, J.F. Age and growth of the swordfish (Xiphias gladius Linnaeus, 1758) in the southeastern Pacific off Chile. Lat. Am. J. Aquat. Res. 2009, 37, 59–69.
  31. Sun, C.L.; Wang, S.P.; Yeh, S.Z. Age and growth of the swordfish (Xiphias gladius L.) in the waters around Taiwan determined from anal-fin rays. Fish. Bull. 2002, 100, 822–835.
  32. Roa-Ureta, R.H. A likelihood-based model of fish growth with multiple length frequency data. J. Agric. Biol. Environ. Stat. 2010, 15, 416–429.
  33. Contreras-Reyes, J.E. Analyzing fish condition factor index through skew-Gaussian information theory quantifiers. Fluct. Noise Lett. 2016, 15, 1650013.
  34. Gupta, M.; Srivastava, S. Parametric Bayesian estimation of differential entropy and relative entropy. Entropy 2010, 12, 818–843.
  35. Gupta, R.C.; Brown, N. Reliability studies of the skew-normal distribution and its application to a strength-stress model. Commun. Stat. Theory Methods 2001, 30, 2427–2445.
  36. Bennett, G. Lower bounds for matrices. Linear Algebra Appl. 1986, 82, 81–98.
Figure 1. Finite mixtures of skew-normal (FMSN) density simulations using samples of 500 generations and the settings given by the examples (a) 1, (b) 2, (c) 3, (d) 4, (e) 5, and (f) 6 of Section 4.1.
Figure 2. Length–weight log-transformed relationship and regression fit (red solid line) for (a) male and (b) female swordfish.
Figure 3. Logarithm of the average of the upper and lower bounds for the Shannon and Rényi entropies, for (a,c) males and (b,d) females, respectively.
Figure 4. Selected Finite Mixture of Skew-Normal (FMSN) fits for (a) males and (b) females. Each color corresponds to each mixture component from a total of m = 7 .
Table 1. Simulated Shannon and Rényi entropy bounds. Rényi entropy bounds are computed for α = 2 , , 5 . For each model and number of clusters m, the Akaike (AIC) and Bayesian (BIC) information criteria, misclassification (MC), normal skill (NSS), Heidke skill (HSS), and Hanssen–Kuipers (HK) scores appear. The shaded regions highlight the smallest AIC and BIC values.
Columns: Example; m; MC; NSS; HSS; HK; AIC; BIC; Shannon bounds A_lower, A_upper, B_lower, B_upper; Rényi lower bounds R_α (α = 2, 3, 4, 5); Rényi upper bounds R_α (α = 2, 3, 4, 5).
120.020.980.950.394831.884866.230.992.200.8881.580.720.961.011.033.543.062.892.80
30.030.970.930.424840.364894.341.892.791.582.920.580.840.910.943.523.042.872.78
40.020.980.960.414846.624920.231.852.801.443.030.580.850.930.973.523.042.872.78
50.610.390.010.014851.354944.592.152.911.933.590.500.630.650.653.503.012.852.76
60.770.230.000.004858.874971.752.192.922.043.740.590.700.700.703.503.012.852.76
220.010.990.980.406581.526615.872.573.920.663.261.181.291.321.334.934.454.284.19
30.001.001.000.626065.256119.232.964.250.753.991.061.321.381.404.974.494.324.23
40.510.490.130.086071.556145.162.984.250.784.320.791.061.131.164.954.474.304.21
50.600.400.000.006080.716173.953.284.261.554.810.831.061.111.124.954.474.304.21
60.590.410.000.006090.826203.703.624.392.325.330.730.900.940.944.954.474.304.21
320.001.001.000.465766.825840.433.443.942.844.091.681.611.551.505.274.674.464.35
31.000.00−0.96−0.495778.715891.593.664.451.844.720.790.950.980.996.285.695.485.36
41.000.00−0.98−0.505785.435937.573.784.621.735.130.620.800.840.866.626.035.825.70
51.000.00−0.24−0.195798.105989.513.894.661.925.310.720.840.850.846.716.115.905.79
60.260.740.000.005758.535989.204.014.791.885.670.640.790.810.816.976.386.176.05
420.760.24−0.11−0.078758.638832.253.354.610.823.921.641.942.022.056.606.005.795.68
30.300.700.140.058282.848395.724.114.842.065.151.361.531.561.567.246.656.446.32
40.360.640.460.318295.268447.404.515.242.065.711.731.831.841.837.867.277.066.94
50.360.640.470.328300.948492.344.415.241.785.791.271.391.401.407.867.277.066.95
60.570.430.210.158246.468477.124.645.431.886.171.291.421.431.428.237.647.437.32
520.640.36−0.03−0.029650.239772.925.576.521.616.252.432.542.522.4810.269.569.309.17
30.450.550.020.019510.539697.025.666.731.286.651.842.072.132.1510.539.829.579.43
40.530.470.190.139513.739764.025.836.781.437.191.681.901.961.9811.0610.3610.109.97
50.920.08−0.23−0.179539.029853.115.996.901.527.491.601.781.801.7911.4110.7010.4510.31
60.680.320.050.049550.449928.335.936.851.517.631.541.731.761.7711.2610.5610.3010.17
620.620.38−0.04−0.0233479.0133601.7015.5616.930.6416.257.537.707.757.7741.5040.7940.5440.40
31.000.00−0.47−0.3233019.8333206.3316.8818.180.7517.847.457.717.797.8245.2644.5544.3044.16
40.450.550.290.1832417.8032668.1017.3118.620.7318.657.457.777.877.9245.8245.1144.8644.72
50.930.07−0.15−0.1232346.2932660.3917.3118.630.7118.786.957.137.157.1546.6145.9145.6545.52
60.560.440.200.1432458.0532835.9517.5418.850.7319.207.057.277.327.3347.2646.5646.3046.17
Table 2. Summary of the estimates of log α and β, with their respective standard errors in parentheses, for the length–weight log-transformed relationship of Equation (14), by sex.
Sex      Parameter   Estimate (SE)      t-Value   p-Value   R² (%)
Male     log α       −11.619 (0.202)    −57.53    < 0.01    92.6
         β           3.064 (0.040)      77.58     < 0.01
Female   log α       −12.413 (0.176)    −70.43    < 0.01    94.7
         β           3.218 (0.034)      94.95     < 0.01
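The log-transformed length–weight relationship of Equation (14), ln W = log α + β ln L, is an ordinary linear regression of ln W on ln L. The sketch below is our own illustration on synthetic data (the true parameters are chosen near the male estimates of Table 2, purely for demonstration; the actual swordfish data are available from IFOP on request).

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic lengths (cm) and weights from W = a * L^b with lognormal noise
# (a, b chosen near the male estimates in Table 2; purely illustrative)
a_true, b_true = np.exp(-11.6), 3.06
L = rng.uniform(120.0, 280.0, size=500)
W = a_true * L ** b_true * np.exp(rng.normal(0.0, 0.15, size=500))

# Ordinary least squares on the log-transformed model: ln W = log_a + b * ln L
X = np.column_stack([np.ones_like(L), np.log(L)])
coef, *_ = np.linalg.lstsq(X, np.log(W), rcond=None)
log_a_hat, b_hat = coef
print(round(log_a_hat, 3), round(b_hat, 3))
```

With β near 3, weight scales roughly with the cube of length, i.e., close to isometric growth, matching the magnitudes reported in Table 2.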
Table 3. Summary of Finite Mixture of Skew-Normal (FMSN) and Finite Mixture of Normal (FMN) clustering. The shaded regions highlight the smallest Akaike (AIC) and Bayesian (BIC) information criteria values. For each model and number of clusters m the misclassification (MC), normal skill (NSS), Heidke skill (HSS), and Hanssen–Kuipers (HK) scores appear.
Model   m    Male                                          Female
             MC     NSS    HSS    HK     AIC      BIC      MC     NSS    HSS    HK     AIC      BIC
FMSN    2    0.70   0.30   0.01   0.01   7742.24  7805.03  0.75   0.25   0.00   0.00   8834.89  8898.32
        3    0.77   0.23   −0.05  −0.04  7754.18  7850.46  0.87   0.13   −0.05  −0.04  8844.91  8942.17
        4    0.62   0.38   0.14   0.10   7741.22  7871.00  0.89   0.11   −0.09  −0.07  8838.63  8969.71
        5    0.42   0.58   0.42   0.30   7751.47  7914.73  0.90   0.10   −0.10  −0.08  8847.75  9012.67
        6    0.45   0.55   0.43   0.35   7760.76  7957.51  0.83   0.17   −0.04  −0.03  8864.74  9063.48
        7    0.29   0.71   0.61   0.46   7770.85  8001.10  0.35   0.65   0.56   0.46   8865.46  9098.03
        8    0.51   0.49   0.37   0.30   7783.31  8047.05  0.69   0.31   0.14   0.11   8879.20  9145.59
        9    0.65   0.35   0.22   0.18   7769.48  8066.70  0.49   0.51   0.42   0.35   8885.98  9186.20
        10   –      –      –      –      –        –        0.59   0.41   0.31   0.27   8897.56  9231.61
        11   –      –      –      –      –        –        0.65   0.35   0.26   0.22   8900.87  9268.75
FMN     2    0.70   0.30   0.02   0.01   7818.79  7864.84  0.75   0.25   0.00   0.00   8914.32  8960.83
        3    0.78   0.22   −0.04  −0.03  7737.23  7808.40  0.88   0.12   −0.03  −0.03  8848.43  8920.31
        4    0.52   0.48   0.28   0.20   7729.77  7826.05  0.87   0.13   −0.08  −0.06  8818.25  8915.51
        5    0.43   0.57   0.43   0.32   7733.33  7854.73  0.82   0.18   −0.05  −0.04  8820.12  8942.74
        6    0.43   0.57   0.46   0.36   7744.00  7890.52  0.70   0.30   0.11   0.09   8831.16  8979.16
        7    0.53   0.47   0.35   0.29   7738.41  7910.04  0.66   0.34   0.17   0.14   8839.63  9013.00
        8    0.52   0.48   0.36   0.29   7750.27  7947.02  0.45   0.55   0.47   0.39   8849.82  9048.56
        9    0.85   0.15   −0.01  −0.01  7751.24  7973.11  0.50   0.50   0.41   0.35   8855.10  9079.21
        10   –      –      –      –      –        –        0.78   0.22   0.11   0.10   8857.49  9106.97
        11   –      –      –      –      –        –        0.62   0.38   0.29   0.25   8852.37  9127.22
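Table 3 ranks the FMN and FMSN fits by AIC and BIC, computed from the maximized log-likelihood as AIC = −2ℓ + 2k and BIC = −2ℓ + k ln n, with k the number of free parameters. The sketch below is a minimal illustration we add here (a plain univariate normal mixture fitted by a basic EM on synthetic lengths, not the mixsmsn/EMMIXuskew fits used in the paper) of how these criteria select the number of components m.

```python
import numpy as np

rng = np.random.default_rng(7)

def em_normal_mixture(y, m, iters=200):
    """Minimal EM for a univariate m-component normal mixture; returns log-likelihood."""
    n = len(y)
    pi_ = np.full(m, 1.0 / m)
    mu = np.quantile(y, np.linspace(0.1, 0.9, m))
    sd = np.full(m, np.std(y))
    for _ in range(iters):
        # E-step: posterior responsibilities of each component
        dens = np.exp(-0.5 * ((y[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
        r = pi_ * dens
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted updates of weights, means, and standard deviations
        nk = r.sum(axis=0)
        pi_ = nk / n
        mu = (r * y[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (y[:, None] - mu) ** 2).sum(axis=0) / nk)
        sd = np.maximum(sd, 1e-3)  # guard against variance collapse
    dens = np.exp(-0.5 * ((y[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    return np.log((pi_ * dens).sum(axis=1)).sum()

# Synthetic bimodal "length" sample (cm): two well-separated groups
y = np.concatenate([rng.normal(150, 10, 300), rng.normal(210, 15, 200)])

crit = {}
for m in (1, 2, 3):
    ll = em_normal_mixture(y, m)
    k = 3 * m - 1  # m means + m standard deviations + (m - 1) free weights
    crit[m] = (-2 * ll + 2 * k,                 # AIC
               -2 * ll + k * np.log(len(y)))    # BIC
best_bic = min(crit, key=lambda m: crit[m][1])
print(crit, "BIC selects m =", best_bic)
```

BIC's stronger penalty (k ln n versus 2k) makes it more conservative than AIC for large samples, which is why the two criteria can disagree in Table 3.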

Contreras-Reyes, J.E.; Cortés, D.D. Bounds on Rényi and Shannon Entropies for Finite Mixtures of Multivariate Skew-Normal Distributions: Application to Swordfish (Xiphias gladius Linnaeus). Entropy 2016, 18, 382. https://doi.org/10.3390/e18110382
