Article

Divergence-Based Locally Weighted Ensemble Clustering with Dictionary Learning and L2,1-Norm

1 School of Computing and Artificial Intelligence, Southwestern University of Finance and Economics, Chengdu 611130, China
2 Department of Computer Science, Harbin Finance University, Harbin 150030, China
* Author to whom correspondence should be addressed.
Entropy 2022, 24(10), 1324; https://doi.org/10.3390/e24101324
Submission received: 17 August 2022 / Revised: 11 September 2022 / Accepted: 19 September 2022 / Published: 21 September 2022
(This article belongs to the Special Issue Recent Advances in Statistical Theory and Applications)

Abstract: Accurate clustering is a challenging task with unlabeled data. Ensemble clustering aims to combine sets of base clusterings to obtain a better and more stable clustering and has shown its ability to improve clustering accuracy. Dense representation ensemble clustering (DREC) and entropy-based locally weighted ensemble clustering (ELWEC) are two typical methods for ensemble clustering. However, DREC treats each microcluster equally and hence ignores the differences between microclusters, while ELWEC conducts clustering on clusters rather than microclusters and ignores the sample–cluster relationship. To address these issues, a divergence-based locally weighted ensemble clustering with dictionary learning (DLWECDL) is proposed in this paper. Specifically, the DLWECDL consists of four phases. First, the clusters from the base clustering are used to generate microclusters. Second, a Kullback–Leibler divergence-based ensemble-driven cluster index is used to measure the weight of each microcluster. With these weights, an ensemble clustering algorithm with dictionary learning and the $L_{2,1}$-norm is employed in the third phase. Meanwhile, the objective function is resolved by optimizing four subproblems and a similarity matrix is learned. Finally, a normalized cut (Ncut) is used to partition the similarity matrix and the ensemble clustering results are obtained. In this study, the proposed DLWECDL was validated on 20 widely used datasets and compared to some other state-of-the-art ensemble clustering methods. The experimental results demonstrated that the proposed DLWECDL is a very promising method for ensemble clustering.

1. Introduction

For a long time, clustering has been widely studied as an important technology for machine learning [1,2,3,4]. However, due to the lack of prior knowledge, i.e., pre-labeled training data, the accuracy of clustering algorithms is much lower than that of supervised learning methods. Traditional single clustering methods, such as k-means, balanced iterative reducing and clustering using hierarchies (BIRCH), density-based spatial clustering of applications with noise (DBSCAN), etc., usually cannot achieve good clustering results for complex data [5,6]. Encouraged by the accuracy improvements achieved by ensemble learning methods, many researchers have begun to study clustering ensemble algorithms. Clustering ensembles learn from multiple base clustering results to obtain consensus results, which can greatly improve the clustering accuracy without the need for prior knowledge [7,8,9,10,11,12].
Ensemble clustering methods focus either on the selection of the base clusterings or on the ensemble method itself [13]. The selection of base clusterings has two influences on the consensus results: accuracy and diversity. Higher accuracy usually leads to lower diversity of the base clusterings, while higher diversity results in lower accuracy of the base clusterings [14]. Therefore, balancing these two factors is key in the selection of base clusterings. Ensemble methods aim to learn more robust consensus results by mining more effective information from the base clustering sets. Essentially, ensemble methods mine more inner information from the base clusterings. Although there are many robust ensemble methods, it is difficult to identify which ensemble method outperforms the others on a given dataset due to the randomness of the base clustering selection and the diversity of datasets.
Generally speaking, the most commonly used representative methods for mining this information from base clusterings include (1) co-association (CA) matrices, which represent the mutual relationships between samples in the base clustering sets, i.e., relationships at the sample level, (2) cluster–cluster (CC) matrices, which indicate the relationships between clusters in base clustering sets, i.e., relationships at the cluster level, and (3) sample–cluster matrices, which represent the relationships between samples and clusters in base clustering sets, i.e., relationships at the sample–cluster level. Both CA and CC matrices can be calculated from sample–cluster matrices. CA matrices reveal the probability that samples are of the same class. The larger the value of $X_{ij}$ in a CA matrix, the greater the possibility that the samples i and j are of the same class. Some methods aim to retain or learn reliable samples in CA matrices and then seek consensus results [10,14]. For example, Jia et al. proposed an effective self-enhancement framework for CA matrices to improve the ensemble clustering results, through which high-confidence information was extracted from base clusterings [15]. CC matrices reveal the similarities between clusters; they cannot be used alone for ensembles due to the lack of effective information, so they have to be combined with other valid information to perform accurate clustering. Therefore, some researchers have used CC matrices to calculate similarities and then mapped them as weights to CA matrices or sample–cluster matrices [11,16]. Sample–cluster matrices are the original matrices in base clustering sets and retain the most complete information in base clustering sets. Some methods choose to explore hidden information in the original matrices [11]. For example, based on sample–cluster matrices, the dense representation ensemble clustering (DREC) method introduces microcluster representation, reduces the amount of data, retains the effective information from sample–cluster matrices to the greatest extent and then performs dense representation clustering, which not only improves the time performance but also explores the hidden effective information to the greatest extent [13]. Huang et al. pointed out that the differences between microclusters also play important roles in ensemble clustering [17]. However, the DREC method ignores the differences between microclusters. Moreover, it does not reveal the underlying structures in sample–cluster matrices well. Entropy-based locally weighted ensemble clustering (ELWEC) has been demonstrated as being effective in improving clustering accuracy [18]. The key reason for this is the adoption of the idea of mapping entropy-based local weights to clustering. However, the ELWEC method measures the weights of clusters rather than microclusters and ignores sample–cluster relationships, thereby limiting the clustering performance to some extent. Very recently, the Markov process [19], a growing tree model [20], a low-rank tensor approximation [21] and an equivalence granularity [22] have been applied to ensemble clustering to achieve better clustering results.
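To make the sample-level (CA) representation concrete, the following minimal NumPy sketch (not taken from the paper; the function name and interface are illustrative) accumulates a co-association matrix from a stack of base clustering label vectors.

```python
import numpy as np

def co_association_matrix(base_labels):
    """Build a co-association (CA) matrix from M base clusterings.

    base_labels: array of shape (M, n) holding the cluster label that each of the
    M base clusterings assigns to each of the n samples. Entry (i, j) of the result
    is the fraction of base clusterings that place samples i and j in the same cluster.
    """
    base_labels = np.asarray(base_labels)
    M, n = base_labels.shape
    ca = np.zeros((n, n))
    for m in range(M):
        same = base_labels[m][:, None] == base_labels[m][None, :]
        ca += same.astype(float)
    return ca / M

# Toy usage: 3 base clusterings of 5 samples.
labels = np.array([[0, 0, 1, 1, 2],
                   [0, 0, 0, 1, 1],
                   [1, 1, 2, 2, 0]])
print(co_association_matrix(labels))
```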
Motivated by the above analysis, a divergence-based locally weighted ensemble clustering with dictionary learning (DLWECDL) is proposed in this paper. The idea of local weights was introduced to the DLWECDL. Different from the entropy-based local weights of clusters in ELWEC, this study used the divergence-based local weights of microclusters for ensemble clustering. Specifically, low-rank representation, the $L_{2,1}$-norm and dictionary learning were applied to design the objective function and the corresponding constraints. We used the augmented Lagrange multiplier (ALM) with alternating direction minimization (ADM) strategy for the optimization of the objective function. Extensive experiments on real datasets demonstrated the effectiveness of our proposed method.
The main contributions of this paper are summarized as follows:
(1) The proposal of a Kullback–Leibler divergence-based weighted method to better reveal relationships between clusters;
(2) The use of low-rank representation instead of dense representation to better explore hidden effective information and low-rank structures of original matrices;
(3) The application of the $L_{2,1}$-norm to noise to improve robustness;
(4) The introduction of adaptive dictionary learning to better learn low-rank structures;
(5) Extensive experiments to demonstrate that the proposed DLWECDL can significantly outperform other state-of-the-art approaches.
The rest of this paper is organized as follows. Section 2 reviews related works on ensemble clustering. The proposed ensemble clustering method is described in detail in Section 3. The experimental settings and results are analyzed and discussed in Section 4. Finally, Section 5 concludes the paper and provides our recommendations for future work.

2. Related Works

2.1. Ensemble Clustering

The goal of ensemble clustering is to find consensus results based on M base clusterings. To obtain good consensus results, two questions naturally arise. The first question is the selection of the base clusterings, which should not only ensure the diversity of the base clusterings but also their quality or accuracy. Existing studies have proposed some methods that take into account the diversity and quality of base clusterings [23,24]. The second question is the ensemble method, which is roughly divided into two categories: similarity matrix-based learning and graph-based learning. The construction of the similarity matrix is a core problem in various clustering methods. In ensemble clustering, similarity matrices are obtained by exploring sample–sample, cluster–cluster and sample–cluster relationship matrices, and spectral clustering is then applied to obtain the final clustering results.
Based on similarity matrices, our method follows the dense representation ensemble clustering framework, which finds microclusters and then performs representation learning at the microcluster level. However, treating all microclusters equally does not work well for microclusters that contain more samples. Therefore, we designed a local weight-based microcluster ensemble method and used a new low-rank representation clustering method. Inspired by the ALRR method [25], we introduced the $L_{2,1}$-norm and adaptive dictionary learning into the new low-rank representation method.

2.2. Microcluster Representatives

Our approach starts by finding microcluster representatives to simplify the problem. A sample–cluster matrix needs to be reconstructed before looking for these microcluster representatives.
Figure 1 is an example that illustrates our definition of a microcluster, where $C_i$ is the i-th base clustering, $X_j$ represents the j-th sample and the numbers 1–7 in the heading of the full data matrix are the globally renamed cluster IDs. We reconstructed the original base clustering results to obtain the full data matrix, in which we observed that the information in samples $X_1$ and $X_2$ was completely consistent. Therefore, we grouped $X_1$ and $X_2$ into the same microcluster and chose either $X_1$ or $X_2$ as the microcluster representative.
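As an illustration of this reconstruction step, the following sketch (an assumed interface, not the authors' code; base clustering labels are assumed to be 0-based integers) globally renames the clusters, builds the binary sample–cluster matrix and collapses samples with identical rows into microclusters.

```python
import numpy as np

def build_microclusters(base_labels):
    """Collapse samples with identical base-clustering assignments into microclusters.

    base_labels: (M, n) array of 0-based cluster labels, one row per base clustering.
    Returns (rep_matrix, sample_to_micro), where rep_matrix is the binary microcluster
    representative matrix (one row per microcluster, one column per globally renamed
    cluster) and sample_to_micro maps each sample to its microcluster index.
    """
    base_labels = np.asarray(base_labels)
    M, n = base_labels.shape
    # Globally rename clusters so that IDs are unique across base clusterings (cf. Figure 1).
    offsets = np.concatenate(([0], np.cumsum(base_labels.max(axis=1)[:-1] + 1)))
    global_ids = base_labels + offsets[:, None]
    B = np.zeros((n, global_ids.max() + 1))
    for m in range(M):
        B[np.arange(n), global_ids[m]] = 1.0
    # Identical rows of the sample-cluster matrix form one microcluster.
    rep_matrix, sample_to_micro = np.unique(B, axis=0, return_inverse=True)
    return rep_matrix, sample_to_micro
```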

2.3. Information Entropy-Based Locally Weighted Method

The information entropy-based locally weighted method mainly explores the uncertainty of each cluster [18]. It introduces the concept of entropy to calculate the uncertainty of each cluster and then determines the weight of each cluster using a monotonically decreasing function: the more stable a cluster is, the smaller its uncertainty and the larger its weight. However, for similar clusters, it cannot guarantee that the final weights are consistent, while the weights of completely different clusters may coincide.
We used the locally weighted method for microclusters: we calculated the weight of each cluster in each base clustering and then applied it to the microclusters. The weights were measured using the ensemble-driven cluster index (ECI).
Taking a cluster $\pi_i$ from the i-th base clustering as an example, the weights were calculated as follows:
$H(\pi_i) = -\sum_{m=1}^{M}\sum_{j=1}^{K} p(\pi_i, \pi_j^m) \log_2 p(\pi_i, \pi_j^m)$    (1)
$p(\pi_i, \pi_j^m) = \dfrac{|\pi_i \cap \pi_j^m|}{|\pi_i|}$    (2)
$ECI(\pi_i) = e^{-\frac{H(\pi_i)}{\theta M}}$    (3)
where $\theta$ is a control parameter and $|\cdot|$ represents the number of samples in a cluster. After obtaining the ECI weight of each cluster, we applied them to the selected representative microcluster matrix to obtain the final data matrix.
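For illustration, a minimal sketch of the ECI computation in Eqs. (1)–(3) is given below (an assumed interface; base_labels is an M x n array of base clustering label vectors, as in the earlier sketches).

```python
import numpy as np

def eci_weights(base_labels, theta=0.5):
    """Entropy-based ensemble-driven cluster index (ECI), following Eqs. (1)-(3).

    Returns a dict mapping (base clustering index m, cluster id c) to the ECI weight
    of that cluster. theta is the control parameter from Eq. (3).
    """
    base_labels = np.asarray(base_labels)
    M, n = base_labels.shape
    weights = {}
    for m in range(M):
        for c in np.unique(base_labels[m]):
            members = base_labels[m] == c            # samples in cluster pi_i
            size = members.sum()
            H = 0.0
            for mm in range(M):                       # uncertainty w.r.t. every base clustering
                for cc in np.unique(base_labels[mm][members]):
                    p = (base_labels[mm][members] == cc).sum() / size
                    H -= p * np.log2(p)
            weights[(m, c)] = np.exp(-H / (theta * M))
    return weights
```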

2.4. Dense Representation Ensemble Clustering

The concept of microclusters was introduced in the DREC method. The scale of the ensemble clustering problem is simplified using the "slim-down strategy"; similarity matrices are then obtained using the dense representation method and the final segmentation is obtained by applying the Ncut algorithm. Because of the microclusters, the DREC method improves time efficiency and preserves more original information. However, the DREC method treats the "shrunk" samples equally, which is inappropriate because microclusters contain different numbers of samples. At the same time, although the DREC method considers the influence of noise, it fails to consider the selection of the base clusterings, which also leads to instability in the final results when randomly selected base clusterings are integrated.

3. Divergence-Based Locally Weighted Ensemble Clustering with Dictionary Learning (DLWECDL)

The goal of ensemble clustering is to learn consistent results based on M base clusterings. In ensemble clustering, the key is to explore the effective information in base clustering sets. The effective information in base clustering sets is hidden within three common manifestations, namely sample–sample relational representation, sample–cluster relational representation and cluster–cluster relational representation. We believe that good consensus results can be obtained when all of the valid information from the three representations can be fully utilized. Sample–cluster relationship matrices are key to linking these three representations because they can be used to calculate the remaining two representations. Therefore, we took the sample–cluster relational representation as the base representation and used it as the data matrix for our method. It was the original representation of our base clustering set.

3.1. Divergence-Based Locally Weighted Method

The information entropy-based locally weighted method mainly considers the uncertainty between clusters. We introduced the Kullback–Leibler (KL) divergence, which is widely used to measure the differences between distributions. When two distributions are exactly the same, the KL divergence is 0. Considering the good performance of KL divergence in some clustering methods over recent years, we introduced KL divergence as a measure of local weights. Since $p(\pi_i, \pi_j^m)$ and $p(\pi_j^m, \pi_i)$ do not have clear probabilistic interpretations, the KL divergence values here were not guaranteed to always be greater than 0. After obtaining the KL divergence, we used the ECI entropy mapping function to obtain the new KL divergence weights.
$KL(\pi_i) = \sum_{m=1}^{M}\sum_{j=1}^{K} p(\pi_i, \pi_j^m) \log_2 \dfrac{p(\pi_i, \pi_j^m)}{p(\pi_j^m, \pi_i)}$    (4)
To better illustrate the advantages of KL divergence weighting, an example is presented in Figure 2, where $C_i$ represents the i-th base clustering result, $\pi_i^j$ denotes the j-th cluster in the i-th base clustering and the numbers 1–12 in the circles are the numbers of the samples. As shown in Table 1, we compared the results of the inter-cluster entropy calculation and the KL divergence calculation, where R represents the ratio of the maximum number of samples in the stable subsets to the number of samples that were contained in the clusters. For example, Samples 1, 2 and 3 were assigned to $\pi_1^1$, $\pi_2^1$ and $\pi_3^1$ in the base clusterings $C_1$, $C_2$ and $C_3$, respectively. The three samples were classified into the same class in different base clustering results. This meant that the most stable subset of each of $\pi_1^1$, $\pi_2^1$ and $\pi_3^1$ was {Sample 1, Sample 2, Sample 3}. Therefore, the R values for clusters $\pi_1^1$, $\pi_2^1$ and $\pi_3^1$ were $3/3 = 1$, $3/5 = 0.6$ and $3/4 = 0.75$, respectively. It can be observed from Table 1 that the R values of $\pi_1^3$, $\pi_1^4$ and $\pi_2^3$ were consistent but the entropy values were quite different. This led to inconsistent weights. The same situation occurred in $\pi_2^2$ and $\pi_3^2$. The KL divergence method reduced the gaps between clusters with the same R values as much as possible so that the weights were as consistent as possible.
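A minimal sketch of the KL divergence weighting in Eq. (4) is shown below. The exponential mapping at the end mirrors the ECI form of Eq. (3); the paper only states that "the ECI entropy mapping function" is reused, so that particular form is an assumption.

```python
import numpy as np

def kl_weights(base_labels, theta=0.5):
    """KL divergence-based local cluster weights, following Eq. (4).

    base_labels: (M, n) array of base clustering labels. For each cluster pi_i,
    p(pi_i, pi_j^m) = |pi_i & pi_j^m| / |pi_i| and p(pi_j^m, pi_i) = |pi_i & pi_j^m| / |pi_j^m|.
    The final weight exp(-KL / (theta * M)) is an assumed ECI-style mapping.
    """
    base_labels = np.asarray(base_labels)
    M, n = base_labels.shape
    weights = {}
    for m in range(M):
        for c in np.unique(base_labels[m]):
            members = base_labels[m] == c
            size_i = members.sum()
            kl = 0.0
            for mm in range(M):
                for cc in np.unique(base_labels[mm][members]):
                    inter = ((base_labels[mm] == cc) & members).sum()
                    p_ij = inter / size_i
                    p_ji = inter / (base_labels[mm] == cc).sum()
                    kl += p_ij * np.log2(p_ij / p_ji)
            weights[(m, c)] = np.exp(-kl / (theta * M))
    return weights
```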

3.2. L 2 , 1 -Norm Subspace Clustering of Adaptive Dictionaries

After obtaining the final data matrix, we developed a new subspace clustering method. Unlike dense representation, we explored similarity matrices using low-rank representation, which incorporated an adaptive dictionary learning strategy and employed a new regularization term, i.e., the $L_{2,1}$-norm.
The original low-rank subspace clustering that could explore similarity matrices was formulated as follows:
$\min_{Z,E} \|Z\|_* + \lambda\|E\|_{2,1} \quad \mathrm{s.t.}\; X = DZ + E$    (5)
where $\lambda$ is a regularization parameter, X represents the data matrix, D is the dictionary, Z is the low-rank representation coefficient matrix, E is the noise and $\|\cdot\|_*$ and $\|\cdot\|_{2,1}$ represent the nuclear norm and the $L_{2,1}$-norm of a matrix, respectively. The original low-rank representation method takes the data X itself as the dictionary D. On this basis, many low-rank representation subspace clustering algorithms have been further proposed and the adaptive dictionary learning low-rank representation [25] problem can be formulated as follows:
$\min_{Z,D} \|Z\|_* + \lambda\|X - DZ\|_F^2 \quad \mathrm{s.t.}\; DD^T = I_d$    (6)
where $\|\cdot\|_F$ is the well-known Frobenius norm, which was used here for computational convenience because many closed-form solutions that are based on this norm can greatly improve time efficiency. In order to eliminate the arbitrary scaling factor in the process of dictionary learning, the dictionary D was replaced by $P^TX$. To take into account the advantages of dictionary learning and noise immunity, our method was formulated as follows:
$\min_{Z,P,E} \|Z\|_* + \lambda\|E\|_{2,1} \quad \mathrm{s.t.}\; X = P^TXZ + E,\; P^TXX^TP = I_d$    (7)
where P denotes a low-dimensional projection matrix and $I_d$ is the identity matrix. The proposed method not only retains dictionary learning in low-rank representation, i.e., learning better and more orthogonal dictionaries, but also adopts the $L_{2,1}$-norm to make it more robust to noise. A widely accepted theory is that high-dimensional data are determined by low-dimensional structures. The low-rank matrix Z that was obtained according to the objective function contained the angle information between the data samples. We performed SVD decomposition on the low-rank matrix Z and obtained $H = U\Sigma^{1/2}$. We then used H to obtain the final similarity matrix W.
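The step from the learned coefficients Z to the similarity matrix W can be sketched as follows, a minimal NumPy illustration of $H = U\Sigma^{1/2}$ and the cosine-based entries $[W]_{ij}$ used in Algorithm 1; the default value of alpha and the small epsilon guard are implementation choices, not taken from the paper.

```python
import numpy as np

def similarity_from_Z(Z, alpha=4):
    """Affinity matrix W from the low-rank coefficients Z (cf. Algorithm 1, step 4)."""
    U, s, _ = np.linalg.svd(Z, full_matrices=False)
    H = U * np.sqrt(s)                                            # H = U * Sigma^{1/2}
    Hn = H / (np.linalg.norm(H, axis=1, keepdims=True) + 1e-12)   # guard against zero rows
    cos = Hn @ Hn.T                                               # cosine similarity of rows h_i, h_j
    return cos ** (2 * alpha)                                     # even power keeps entries nonnegative
```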
The detailed steps of the proposed DLWECDL are described in Algorithm 1, a flowchart for which is also shown in Figure 3. It should be noted that $\alpha$ in Algorithm 1 is a positive integer parameter and $h_i$ and $h_j$ are the i-th and j-th rows of matrix H, respectively.
Algorithm 1: Divergence-based locally weighted ensemble clustering with dictionary learning (DLWECDL).
Input: M base clusterings, $C_1, C_2, \ldots, C_M$.
Output: Consensus clustering result S.
1. Reconstruct the data matrix to obtain a microcluster representative matrix.
2. Calculate the local divergence weight and local entropy weight and weight the microcluster representative matrix.
3. Learn the low-rank structure Z by low-rank representation with adaptive dictionary learning and the $L_{2,1}$-norm.
4. Calculate H by the SVD decomposition of Z and calculate the similarity matrix W from H:
   $Z = U\Sigma V^T$ (SVD), $H = U\Sigma^{1/2}$, $[W]_{ij} = \left(\dfrac{h_i h_j^T}{\|h_i\|_2 \|h_j\|_2}\right)^{2\alpha}$
5. Perform Ncut to partition the similarity matrix W.
6. Obtain consensus result S by microcluster representative label mapping.
As shown in Figure 3, DLWECDL first introduces microclusters to reduce the amount of data, which reduces data redundancy and improves time efficiency. Then, DLWECDL performs local weighting on the simplified dataset. Two weighting methods, namely entropy-based and KL divergence-based weighting, are used to better represent the microclusters. Theoretically, the entropy-based weighting method focuses more on the uncertainty of the clusters themselves while the KL divergence-based method focuses more on the relative uncertainty, i.e., the differences between clusters. This also means that datasets with more diverse base clusterings may be more suitable for the KL divergence-based weighting method. The third step uses low-rank representation with dictionary learning and the L 2 , 1 -norm to explore deep structures. After using the Ncut method to partition the data, the labels of the reduced dataset need to be mapped to the full dataset because of the introduction of the microclusters.
To demonstrate the feasibility and effectiveness of the proposed algorithm more intuitively, an example on a 2D synthetic dataset is presented in Figure 4. In the example, k-means clustering algorithms with different ks were performed 20 times. Their outputs were used to generate the microclusters, from which a matrix of the KL divergence weights was obtained. Then, low-rank representation with adaptive dictionary learning and the L 2 , 1 -norm was applied to the weighted matrix to obtain an affinity matrix and the corresponding labels for the microclusters. Finally, the labels were mapped to obtain the final results of the proposed DLWECDL. In Figure 4, the microclusters, KL divergence weights, affinity matrix and labels are the intermediate data of the proposed DLWECDL.

3.3. Optimization Method

For Problem (7), we employed the augmented Lagrange multiplier (ALM) with alternating direction minimization (ADM) strategy for optimization [26]. An auxiliary variable J was introduced here. The augmented Lagrangian function is as follows:
$\mathcal{L} = \|J\|_* + \lambda\|E\|_{2,1} + \mathrm{tr}\!\left(Y_1^T\left(X - P^TXZ - E\right)\right) + \mathrm{tr}\!\left(Y_2^T(Z - J)\right) + \dfrac{\mu}{2}\left(\|X - P^TXZ - E\|_F^2 + \|Z - J\|_F^2\right), \quad \mathrm{s.t.}\; P^TXX^TP = I_d$    (8)
where Y 1 and Y 2 are Lagrange multipliers and μ is a penalty parameter. According to the ADM strategy [26], we divided the objective into several subproblems that could be efficiently optimized.

3.3.1. Subproblem J

To update J, we needed to solve the following problem:
$J^* = \arg\min_J \dfrac{1}{\mu}\|J\|_* + \dfrac{1}{2}\left\|J - \left(Z + Y_2/\mu\right)\right\|_F^2$    (9)
Problem (9) had a well-known closed-form solution, which was obtained via SVD-based singular value thresholding. It was consistent with the corresponding step of the LRR method.
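A minimal sketch of this singular value thresholding step, assuming the same variable roles as in Eq. (9), is:

```python
import numpy as np

def update_J(Z, Y2, mu):
    """Closed-form update of J in Problem (9) via singular value thresholding.

    Minimizes (1/mu)*||J||_* + 0.5*||J - (Z + Y2/mu)||_F^2, which shrinks the
    singular values of (Z + Y2/mu) by 1/mu.
    """
    A = Z + Y2 / mu
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_shrunk = np.maximum(s - 1.0 / mu, 0.0)
    return (U * s_shrunk) @ Vt
```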

3.3.2. Subproblem Z

To update Z, we needed to solve the following problem:
$Z^* = \arg\min_Z \mathrm{tr}\!\left(Y_1^T\left(X - P^TXZ - E\right)\right) + \mathrm{tr}\!\left(Y_2^T(Z - J)\right) + \dfrac{\mu}{2}\left(\|X - P^TXZ - E\|_F^2 + \|Z - J\|_F^2\right)$    (10)
Since Problem (10) was unconstrained, we could take the derivative with respect to Z directly. The derivative of Problem (10) is as follows:
$\dfrac{\partial\mathcal{L}}{\partial Z} = -X^TPY_1 + Y_2 + \dfrac{\mu}{2}\left(2X^TPP^TXZ - 2X^TPX + 2X^TPE + 2Z - 2J\right)$    (11)
Setting $\dfrac{\partial\mathcal{L}}{\partial Z} = 0$, we obtained the result for Z as follows:
$Z^* = \left(X^TPP^TX + I\right)^{-1}\left(\dfrac{X^TPY_1 - Y_2}{\mu} + J + X^TP(X - E)\right)$    (12)
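Assuming X stores the (weighted microcluster) samples as columns, so that Z is square and P matches X in its first dimension as required by the constraint $X = P^TXZ + E$, Eq. (12) amounts to a single linear solve, sketched as:

```python
import numpy as np

def update_Z(X, P, E, J, Y1, Y2, mu):
    """Closed-form update of Z from Eq. (12).

    Solves (X^T P P^T X + I) Z = (X^T P Y1 - Y2)/mu + J + X^T P (X - E).
    """
    XtP = X.T @ P
    lhs = XtP @ XtP.T + np.eye(X.shape[1])
    rhs = (XtP @ Y1 - Y2) / mu + J + XtP @ (X - E)
    return np.linalg.solve(lhs, rhs)
```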

3.3.3. Subproblem E

To update E, we needed to solve the following problem:
$E^* = \arg\min_E \dfrac{\lambda}{\mu}\|E\|_{2,1} + \dfrac{1}{2}\left\|E - \left(X - P^TXZ + Y_1/\mu\right)\right\|_F^2$    (13)
As with Problem (9), Problem (13) also had a closed-form solution. We calculated E using Lemma 1.
Lemma 1.
Let $Q = [q_1, q_2, \ldots, q_i, \ldots]$ be a given matrix. When the optimal solution to
$\min_W \lambda\|W\|_{2,1} + \dfrac{1}{2}\|W - Q\|_F^2$    (14)
is $W^*$, then the i-th column of $W^*$ is
$[W^*]_{:,i} = \begin{cases} \dfrac{\|q_i\|_2 - \lambda}{\|q_i\|_2}\, q_i, & \mathrm{if}\; \lambda < \|q_i\|_2 \\ 0, & \mathrm{otherwise} \end{cases}$    (15)
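A sketch of the resulting E update, i.e., Lemma 1 applied column-wise with threshold $\lambda/\mu$ as in Problem (13), under the same shape assumptions as above:

```python
import numpy as np

def update_E(X, P, Z, Y1, mu, lam):
    """Closed-form update of E in Problem (13) using Lemma 1.

    Each column of Q = X - P^T X Z + Y1/mu is shrunk towards zero: columns with
    l2-norm below lam/mu vanish, the rest are scaled down.
    """
    Q = X - P.T @ X @ Z + Y1 / mu
    tau = lam / mu
    col_norms = np.linalg.norm(Q, axis=0)
    scale = np.where(col_norms > tau,
                     (col_norms - tau) / np.maximum(col_norms, 1e-12),
                     0.0)
    return Q * scale
```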

3.3.4. Subproblem P

To update P, we needed to solve the following problem:
$P^* = \arg\min_P \mathrm{tr}\!\left(Y_1^T\left(X - P^TXZ - E\right)\right) + \dfrac{\mu}{2}\|X - P^TXZ - E\|_F^2 \quad \mathrm{s.t.}\; P^TXX^TP = I_d$    (16)
Considering that Problem (16) was a constrained problem, we introduced Lemma 2 to solve it.
Lemma 2.
Given the objective function $\min_R \|Q - GR\|_F^2\; \mathrm{s.t.}\; R^TR = RR^T = I$, the optimal solution is $R = UV^T$, where U and V are the left and right singular vectors of the SVD decomposition of $G^TQ$, respectively.
We transformed Problem (16) to obtain the following results:
$P^* = \arg\min_P \dfrac{\mu}{2}\left\|X - P^TXZ - E + Y_1/\mu\right\|_F^2 \quad \mathrm{s.t.}\; P^TXX^TP = I_d$    (17)
Going one step further:
$P^* = \arg\min_P \dfrac{\mu}{2}\left\|\left(X + Y_1/\mu - E\right)^T - Z^TX^TP\right\|_F^2 \quad \mathrm{s.t.}\; P^TXX^TP = I_d$    (18)
Let $X^TP = R$; then, according to Lemma 2, we could obtain $X^TP = UV^T$. Then, we only needed to calculate the inverse of the data matrix to obtain the solution to Problem (16): $P = (X^T)^{-1}UV^T$.
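A sketch of this P update following the derivation above; a pseudo-inverse is used here in place of the plain inverse to cover non-square X, which is an implementation choice rather than something stated in the paper:

```python
import numpy as np

def update_P(X, Z, E, Y1, mu):
    """Update of P in Problem (16)/(18) via Lemma 2 (orthogonal Procrustes step).

    With Q = (X + Y1/mu - E)^T, G = Z^T and R = X^T P, the update sets R = U V^T
    from the SVD of G^T Q = Z (X + Y1/mu - E)^T and then recovers P through a
    (pseudo-)inverse of X^T.
    """
    Q = (X + Y1 / mu - E).T             # plays the role of Q in Lemma 2
    GtQ = Z @ Q                         # G^T Q with G = Z^T
    U, _, Vt = np.linalg.svd(GtQ, full_matrices=False)
    R = U @ Vt                          # R = X^T P, with R^T R = I
    return np.linalg.pinv(X.T) @ R      # P = (X^T)^+ U V^T
```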
The detailed optimization algorithm for DLWECDL is shown in Algorithm 2.
Algorithm 2: Optimization algorithm for DLWECDL.

3.3.5. Differences between Our Approach and Other Ensemble Clustering Methods

As mentioned in the Introduction, our method introduces the theory of microclusters in order to reduce the dataset size. The divergence weights are then calculated and applied to the microclusters. Finally, a low-rank representation is performed to obtain a similarity matrix. Compared to other existing advanced methods, our method has a great number of differences and advantages, mainly in the following aspects:
(1) Differences in the data matrix. Some methods perform ensemble algorithms based on co-association (CA) matrices [10,27], but CA matrices focus on instance-level relationships and ignore the relationships between clusters. Our method is based on instance–cluster data matrices, although the DREC [13], PTA-CL [17] and CESHL [11] methods also use data matrices that are similar to ours. Among these methods, CESHL does not introduce microclusters and its time efficiency is low. DREC fails to consider the differences between microclusters. Our method makes up for these shortcomings. It is worth pointing out that although the PTA-CL method considers the differences between microclusters, it does not explore their deep structures.
(2) Differences in the weighting methods. The LWEC method is based on the entropy-based weighting method [18]. As shown in Section 3.1, the entropy-based weighting method cannot guarantee consistent weights among similar clusters. Therefore, our method uses KL divergence-based weighting to alleviate this contradiction to a certain extent. Some other weighting methods focus on cluster-level similarities and then map these similarities to the instance level [16].
(3) Differences in the low-rank representation. The existing low-rank representation-based ensemble methods all treat the original data directly as a dictionary [28,29]. Considering that good dictionaries are crucial to the learning of similarity matrices, our method uses a novel low-rank representation with dictionary learning constraints.

4. Experiments

4.1. Datasets and Evaluation Methods

In this section, we present the setup and results of our extensive experiments to validate the proposed algorithm on 20 real datasets. Information about the datasets is listed in Table 2.
Although there are various metrics for evaluating clustering performance, we chose three of them, namely accuracy (ACC), normalized mutual information (NMI) and adjusted rand index (ARI), to evaluate the proposed approach because of their simplicity, popularity and robustness to changes in labeling [18,30].
ACC is the score that is obtained by matching ground truth labels. Since the labels that are assigned by clustering methods may be inconsistent with the ground truth labels, the Hungarian algorithm is generally used for label alignment when calculating ACC, which can be formulated as follows:
$ACC = \max_f \dfrac{1}{n}\sum_{j=1}^{n}\delta\left(y_j, f\left(\pi\left(x_j\right)\right)\right)$    (19)
where $y_j$ represents the ground truth labels and $\delta(y_j, f(\pi(x_j))) = 1$ when $y_j = f(\pi(x_j))$ and $\delta(y_j, f(\pi(x_j))) = 0$ otherwise.
As a measure of the mutual information between the clustering results and the ground truth labels [31], NMI is defined as follows:
$NMI = \dfrac{\sum_p\sum_q n_{p,q}\log\frac{n\cdot n_{p,q}}{n_p\cdot n_q}}{\sqrt{\left(\sum_p n_p\log\frac{n_p}{n}\right)\left(\sum_q n_q\log\frac{n_q}{n}\right)}}$    (20)
where the cluster $c_p$ in the clustering results and the cluster $c_q$ in the ground truth labels contain $n_p$ and $n_q$ instances, respectively, and $n_{p,q}$ is the number of instances shared by $c_p$ and $c_q$.
ARI is an improved version of the rand index (RI) that can reflect the degree of overlap between the clustering results and the ground truth labels [32], which can be defined as follows:
$ARI = \dfrac{\binom{N}{2}\sum_{i=1}^{k}\sum_{j=1}^{k'}\binom{N_{i,j}}{2} - \sum_{i=1}^{k}\binom{N_i^c}{2}\sum_{j=1}^{k'}\binom{N_j^p}{2}}{\frac{1}{2}\binom{N}{2}\left(\sum_{i=1}^{k}\binom{N_i^c}{2} + \sum_{j=1}^{k'}\binom{N_j^p}{2}\right) - \sum_{i=1}^{k}\binom{N_i^c}{2}\sum_{j=1}^{k'}\binom{N_j^p}{2}}$    (21)
where the clustering results and the ground truth labels contain k and k′ clusters, respectively, $N_{i,j}$ is the number of common instances in cluster $c_i$ in the clustering results and cluster $p_j$ in the ground truth labels and $N_i^c$ and $N_j^p$ are the numbers of instances in clusters $c_i$ and $p_j$, respectively.
The definitions of these three evaluation indicators show that the greater the indicator values, the better the method.
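For reference, these three metrics can be computed with standard tooling. The sketch below implements ACC via the Hungarian algorithm (SciPy) and relies on scikit-learn for NMI and ARI; the function name is illustrative and not from the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

def clustering_accuracy(y_true, y_pred):
    """ACC as in Eq. (19): optimal label matching via the Hungarian algorithm."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    classes_true, classes_pred = np.unique(y_true), np.unique(y_pred)
    # Contingency table: rows = predicted clusters, columns = ground-truth classes.
    cont = np.zeros((classes_pred.size, classes_true.size), dtype=int)
    for i, cp in enumerate(classes_pred):
        for j, ct in enumerate(classes_true):
            cont[i, j] = np.sum((y_pred == cp) & (y_true == ct))
    row, col = linear_sum_assignment(-cont)   # maximize the total number of matches
    return cont[row, col].sum() / y_true.size

# NMI and ARI come directly from scikit-learn, e.g.
# normalized_mutual_info_score(y_true, y_pred) and adjusted_rand_score(y_true, y_pred).
```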

4.2. Experimental Settings

Each of the selected datasets contained 100 base clustering results, from which we randomly selected 20 to evaluate the ensemble clustering in each run. There were two main hyperparameters in the proposed approach, namely θ in (3) and λ in Problem (7). We used the grid search method to optimize the hyperparameters with all of the data in each dataset using the set of { 0.2 : 0.1 : 2 } for θ and { 0.01 , 0.1 , 1 , 10 , 100 , 200 , 500 } for λ . Note that these hyperparameters could also be optimized using evolutionary algorithms, as in many practical applications [33,34,35,36]. Additionally, the true number of classes in each dataset was also used as the input of the proposed approach. For each dataset, we ran the experiments 20 times and then reported the average results.
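A minimal sketch of this grid search is shown below; `run_dlwecdl` is a hypothetical stand-in for the full DLWECDL pipeline (weighting, low-rank representation and Ncut), not a released function, and NMI is used here as the selection criterion.

```python
import numpy as np
from itertools import product
from sklearn.metrics import normalized_mutual_info_score

theta_grid = np.round(np.arange(0.2, 2.0 + 1e-9, 0.1), 2)      # {0.2:0.1:2}
lambda_grid = [0.01, 0.1, 1, 10, 100, 200, 500]

def grid_search(base_labels, y_true, n_clusters, run_dlwecdl):
    """Return the (theta, lambda) pair with the highest NMI on the given dataset."""
    best = (None, None, -np.inf)
    for theta, lam in product(theta_grid, lambda_grid):
        labels = run_dlwecdl(base_labels, n_clusters, theta=theta, lam=lam)
        score = normalized_mutual_info_score(y_true, labels)
        if score > best[2]:
            best = (theta, lam, score)
    return best
```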

4.3. Experimental Results

We carried out a large number of repeated experiments and obtained average results, according to the optimal parameter range. We also compared our method to the following models:
  • DREC [13], which introduces microclusters to reduce the amount of data and is a dense representation-based method;
  • LWGP, LWEA [18], which both use locally weighted methods (LWGP is based on graph partitioning and LWEA is based on hierarchical clustering);
  • MCLA [37], which is a clustering ensemble method that is based on hypergraph partitioning;
  • PTA-CL [17], which introduces microclusters, explores probabilistic trajectories based on random walks and then uses complete-linkage hierarchical agglomerative clustering;
  • CESHL [11], which is a clustering ensemble method for structured hypergraph learning;
  • SPCE [10], which introduces a self-paced learning method to learn consensus results from base clusterings;
  • TRCE [27], which is a multi-graph learning clustering ensemble method that considers tri-level robustness.
Note that the proposed DLWECDL used divergence-based local weights for ensemble clustering. We also replaced the divergence-based local weights in DLWECDL with entropy-based local weights but kept the other components unchanged in another algorithm, called ELWECDL, for comparison.
The NMI values of the proposed DLWECDL method and the other selected methods are listed in Table 3, where the best and second best values are shown in bold. From this table, it can be seen that the proposed DLWECDL method achieved the best or second best result in 16 out of the 20 cases, followed by the SPCE and ELWECDL methods (best or second best in 8 out of the 20 cases). TRCE achieved the best or second best result three times, meaning it ranked fourth among the ten methods. DREC, LWGP, LWEA, PTA-CL and CESHL performed so poorly that they all only achieved the best or second best result once. MCLA did not achieve the best or second best value for any of the 20 datasets. On average, DLWECDL and ELWECDL improved the NMI values by 7.36% and 6.12%, respectively, compared to the other eight ensemble clustering models. Therefore, these results demonstrated that the proposed DLWECDL significantly outperformed the other selected ensemble clustering methods in terms of NMI.
The ARI values of the ensemble clustering methods that are shown in Table 4 offered the following findings: (1) the DLWECDL method achieved the best or second best results 17 times, meaning that it ranked first among all of the ensemble clustering methods once again; (2) the DLWECDL method was followed by the ELWECDL method, which achieved the best or second best results for nine datasets; (3) the rest of the methods only achieved the best or second best values three times or less and, notably, both MCLA and TRCE failed to achieve the best or second best values for any of the datasets; (4) on average, the ARI values of DLWECDL and ELWECDL improved by 15.11% and 12.49%, respectively, compared to the other models. These findings confirmed that the proposed DLWECDL method was superior to the other selected methods in terms of ARI.
We further ran DREC, ELWECDL and DLWECDL on eight datasets (Wine, Caltech20, Caltech101, Control, FCT, ISOLET, LS and SPF). Each ensemble clustering method was run 20 times on each dataset and the accuracy values are plotted in Figure 5. We found that the ELWECDL and DLWECDL methods achieved much higher accuracy than the DREC method in almost all cases. Meanwhile, the DLWECDL method was advantageous over the ELWECDL method in most cases, which indicated that the divergence-based local weights were better than the entropy-based local weights for ensemble clustering.

4.4. Impact of Hyperparameters

For the proposed ensemble clustering algorithm, there are two main hyperparameters, i.e., $\lambda$ in Problem (7) and $\theta$ in the ECI. According to our extensive experiments, we found that $\lambda$ had little effect on the final clustering results. The reason for this is that the low-rank structures are mainly explored using low-rank subspace clustering methods and $\|Z\|_*$ dominated Problem (7), as confirmed by Chen et al. [25]. For the weight parameter $\theta$, we found that it had a large influence on the final results and that the optimal value of $\theta$ was related to the random selection of base clusterings in each experiment. According to our experience, the optimal weight parameter was 0.2–2. We selected some other datasets and repeated the experiments another 50 times. The $\theta$ values that corresponded to the maximum NMI values are shown in Figure 6.
As shown in Figure 6a, in the first run of the experiment on the Zoo dataset, the corresponding optimal $\theta$ value was 1.2, which became 1.3 in the second run. In the subsequent experiment runs, $\theta$ did not have a fixed optimal value. The other datasets showed this same trend, which indicated that the parameter $\theta$ in our method was associated with the data matrix, i.e., we could not fix the weight parameter $\theta$, even within the same dataset. This was mainly due to the problem of base clustering set selection.

4.5. Running Time

We compared the running time of the selected algorithms on 10 datasets, as shown in Table 5. As can be seen from the table, the time efficiency of the DLWECDL algorithm was not good because many iterations were performed while learning the low-rank representation. In order to reduce the number of iterations, we could adjust the learning rate, i.e., $\rho$, within an appropriate range, as long as the loss function was reasonably reduced. By adjusting the $\rho$ value, we could control the number of iterations at less than 10, thereby improving the time performance of the algorithm. As can be seen from the table, after we increased the $\rho$ value, the running time of DLWECDL became comparable to, and on several datasets less than, that of DREC [13].

4.6. Discussion

As analyzed in Section 3.3.5, our method is different from the other selected ensemble clustering methods in several aspects. Among them, the CESHL, DREC, PTA-CL and PTGP methods are based on the same data matrix as ours, while TRCE and SPCE are based on CA matrices. DREC, PTA-CL and PTGP all introduce microclusters to reduce the amount of data, while CESHL uses all data matrices directly. The superiority of our method over these methods mainly stems from the idea of the weighting and low-rank representation methods.
The KL divergence-based weighting method measures the differences between clusters, which alleviates the problem of the significant weight differences between similar clusters in ELWEC. Currently, DREC treats all microclusters equally and fails to consider the differences between microclusters. Although PTA-CL, PTGP and CESHL consider the differences between microclusters or clusters, none of them apply low-rank representation, i.e., they offer an insufficient exploration of the underlying information within data matrices. Moreover, CESHL is limited by the scale of the data, which leads to lower time efficiency.
Clustering ensemble methods based on low-rank representation, such as RSEC and NRSEC, are based on CA matrices and focus on instance-level relationships. They also all use the original data, i.e., the CA matrices, directly as dictionaries, although the $L_{2,1}$-norm is applied to consider the influence of noise. In general, the advantages of dictionary learning are more obvious in our method.

5. Conclusions

In this paper, we proposed a new weighting method and a new low-rank representation method with adaptive dictionary learning. The new weighting method was able to mine more effective cluster–cluster relationships. We mapped these inter-cluster relationships into a representative microcluster matrix, i.e., we used the microcluster–cluster matrix as a new data matrix, and added new effective information on the basis of retaining the original matrix information to the greatest possible extent. Furthermore, methods based on low-rank representation with adaptive dictionary learning have been shown to be effective and we used a more reasonable $L_{2,1}$-norm to enhance robustness. Our experimental results demonstrated the effectiveness of our proposed method. On average, the proposed DLWECDL improved the NMI and ARI values by 7.36% and 15.11%, respectively, compared to the other selected SOTA ensemble clustering models. However, due to the influence of the random selection of base clusterings, we could not obtain a fixed optimal weight parameter that matched all possible base clustering combinations, even within the same dataset. Through our extensive experiments, we obtained an empirical range of weight parameters. The selection of the optimal combination of base clusterings within a dataset to obtain a pre-determined optimal weight parameter is our next research direction.

Author Contributions

Conceptualization, J.X. and T.L.; formal analysis, J.W. and T.L.; investigation, J.X. and Y.N.; methodology, J.X. and T.L.; project administration, T.L.; resources, T.L.; software, J.X.; supervision, T.L.; validation, J.X.; writing––original draft preparation, J.X., J.W. and T.L.; writing––review and editing, J.X., J.W., T.L. and Y.N. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Ministry of Education of Humanities and Social Science Project (grant no. 19YJAZH047), the Scientific Research Fund of the Sichuan Provincial Education Department (grant no. 17ZB0433) and the Key Entrusted Projects of Higher Education Teaching Reform in Heilongjiang Province (grant no. SJGZ20200067).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Zhou, Z.H. Machine Learning; Springer Nature: Berlin/Heidelberg, Germany, 2021.
2. Rupp, A.A. Clustering and Classification; Oxford University Press: Oxford, UK, 2013.
3. Omran, M.G.; Engelbrecht, A.P.; Salman, A. An overview of clustering methods. Intell. Data Anal. 2007, 11, 583–605.
4. Li, T.; Qian, Z.; Deng, W.; Zhang, D.; Lu, H.; Wang, S. Forecasting crude oil prices based on variational mode decomposition and random sparse Bayesian learning. Appl. Soft Comput. 2021, 113, 108032.
5. Saxena, A.; Prasad, M.; Gupta, A.; Bharill, N.; Patel, O.P.; Tiwari, A.; Er, M.J.; Ding, W.; Lin, C.T. A review of clustering techniques and developments. Neurocomputing 2017, 267, 664–681.
6. Mittal, M.; Goyal, L.M.; Hemanth, D.J.; Sethi, J.K. Clustering approaches for high-dimensional databases: A review. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2019, 9, e1300.
7. Golalipour, K.; Akbari, E.; Hamidi, S.S.; Lee, M.; Enayatifar, R. From clustering to clustering ensemble selection: A review. Eng. Appl. Artif. Intell. 2021, 104, 104388.
8. Zhang, M. Weighted clustering ensemble: A review. Pattern Recognit. 2021, 124, 108428.
9. Wu, X.; Ma, T.; Cao, J.; Tian, Y.; Alabdulkarim, A. A comparative study of clustering ensemble algorithms. Comput. Electr. Eng. 2018, 68, 603–615.
10. Zhou, P.; Du, L.; Liu, X.; Shen, Y.D.; Fan, M.; Li, X. Self-paced clustering ensemble. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 1497–1511.
11. Zhou, P.; Wang, X.; Du, L.; Li, X. Clustering ensemble via structured hypergraph learning. Inf. Fusion 2022, 78, 171–179.
12. Huang, D.; Wang, C.D.; Wu, J.S.; Lai, J.H.; Kwoh, C.K. Ultra-scalable spectral clustering and ensemble clustering. IEEE Trans. Knowl. Data Eng. 2019, 32, 1212–1226.
13. Zhou, J.; Zheng, H.; Pan, L. Ensemble clustering based on dense representation. Neurocomputing 2019, 357, 66–76.
14. Li, F.; Qian, Y.; Wang, J.; Dang, C.; Jing, L. Clustering ensemble based on sample’s stability. Artif. Intell. 2019, 273, 37–55.
15. Jia, Y.; Tao, S.; Wang, R.; Wang, Y. Ensemble Clustering via Co-association Matrix Self-enhancement. arXiv 2022, arXiv:2205.05937.
16. Huang, D.; Wang, C.D.; Peng, H.; Lai, J.; Kwoh, C.K. Enhanced ensemble clustering via fast propagation of cluster-wise similarities. IEEE Trans. Syst. Man Cybern. Syst. 2018, 51, 508–520.
17. Huang, D.; Lai, J.H.; Wang, C.D. Robust ensemble clustering using probability trajectories. IEEE Trans. Knowl. Data Eng. 2015, 28, 1312–1326.
18. Huang, D.; Wang, C.D.; Lai, J.H. Locally weighted ensemble clustering. IEEE Trans. Cybern. 2017, 48, 1460–1473.
19. Wang, L.; Luo, J.; Wang, H.; Li, T. Markov clustering ensemble. Knowl. Based Syst. 2022, 251, 109196.
20. Li, F.; Qian, Y.; Wang, J. GoT: A Growing Tree Model for Clustering Ensemble. In Proceedings of the AAAI Conference on Artificial Intelligence; AAAI Press: Palo Alto, CA, USA, 2021; Volume 35, pp. 8349–8356.
21. Jia, Y.; Liu, H.; Hou, J.; Zhang, Q. Clustering ensemble meets low-rank tensor approximation. In Proceedings of the AAAI Conference on Artificial Intelligence; AAAI Press: Palo Alto, CA, USA, 2021; Volume 35, pp. 7970–7978.
22. Ji, X.; Liu, S.; Yang, L.; Ye, W.; Zhao, P. Clustering ensemble based on approximate accuracy of the equivalence granularity. Appl. Soft Comput. 2022, 129, 109492.
23. Akbari, E.; Dahlan, H.M.; Ibrahim, R.; Alizadeh, H. Hierarchical cluster ensemble selection. Eng. Appl. Artif. Intell. 2015, 39, 146–156.
24. Jia, J.; Xiao, X.; Liu, B.; Jiao, L. Bagging-based spectral clustering ensemble selection. Pattern Recognit. Lett. 2011, 32, 1456–1467.
25. Chen, J.; Mao, H.; Wang, Z.; Zhang, X. Low-rank representation with adaptive dictionary learning for subspace clustering. Knowl. Based Syst. 2021, 223, 107053.
26. Lin, Z.; Liu, R.; Su, Z. Linearized alternating direction method with adaptive penalty for low-rank representation. Adv. Neural Inf. Process. Syst. 2011, 24, 1–9.
27. Zhou, P.; Du, L.; Shen, Y.D.; Li, X. Tri-level robust clustering ensemble with multiple graph learning. In Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence; AAAI Press: Palo Alto, CA, USA, 2021; pp. 11125–11133.
28. Tao, Z.; Liu, H.; Li, S.; Ding, Z.; Fu, Y. Robust spectral ensemble clustering via rank minimization. ACM Trans. Knowl. Discov. Data (TKDD) 2019, 13, 1–25.
29. Tao, Z.; Liu, H.; Li, S.; Fu, Y. Robust spectral ensemble clustering. In Proceedings of the 25th ACM International on Conference on Information and Knowledge Management, Indianapolis, IN, USA, 24–28 October 2016; pp. 367–376.
30. Jing, L.; Tian, K.; Huang, J.Z. Stratified feature sampling method for ensemble clustering of high dimensional data. Pattern Recognit. 2015, 48, 3688–3702.
31. Shao, M.; Li, S.; Ding, Z.; Fu, Y. Deep linear coding for fast graph clustering. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, Buenos Aires, Argentina, 25–31 July 2015.
32. Hubert, L.; Arabie, P. Comparing partitions. J. Classif. 1985, 2, 193–218.
33. Li, T.; Qian, Z.; He, T. Short-term load forecasting with improved CEEMDAN and GWO-based multiple kernel ELM. Complexity 2020, 2020.
34. Li, T.; Shi, J.; Deng, W.; Hu, Z. Pyramid particle swarm optimization with novel strategies of competition and cooperation. Appl. Soft Comput. 2022, 121, 108731.
35. Deng, W.; Ni, H.; Liu, Y.; Chen, H.; Zhao, H. An adaptive differential evolution algorithm based on belief space and generalized opposition-based learning for resource allocation. Appl. Soft Comput. 2022, 127, 109419.
36. Li, T.; Shi, J.; Zhang, D. Color image encryption based on joint permutation and diffusion. J. Electron. Imaging 2021, 30, 013008.
37. Strehl, A.; Ghosh, J. Cluster ensembles—A knowledge reuse framework for combining multiple partitions. J. Mach. Learn. Res. 2002, 3, 583–617.
38. Fern, X.Z.; Brodley, C.E. Solving cluster ensemble problems by bipartite graph partitioning. In Proceedings of the Twenty-First International Conference on Machine Learning, Banff, AB, Canada, 4 July 2004; p. 36.
39. Liu, H.; Wu, J.; Liu, T.; Tao, D.; Fu, Y. Spectral ensemble clustering via weighted k-means: Theoretical and practical evidence. IEEE Trans. Knowl. Data Eng. 2017, 29, 1129–1143.
Figure 1. The reconstruction of a data matrix.
Figure 2. An example of the entropy and divergence value calculations.
Figure 3. A flowchart of the proposed DLWECDL.
Figure 4. An example on a synthetic dataset.
Figure 5. Our comparison of the accuracy of the three methods.
Figure 6. The weight parameter change diagrams that corresponded to the optimal results of a single repeated experiment.
Table 1. The entropy values versus the KL divergence values.

| Cluster | $\pi_1^1$ | $\pi_1^2$ | $\pi_1^3$ | $\pi_1^4$ | $\pi_2^1$ | $\pi_2^2$ | $\pi_2^3$ | $\pi_3^1$ | $\pi_3^2$ | $\pi_3^3$ |
| R | 1 | 0.33 | 0.67 | 0.67 | 0.6 | 0.5 | 0.67 | 0.75 | 0.5 | 0.25 |
| Entropy | 0 | 1.837 | 1.837 | 0.918 | 2.342 | 2.500 | 0.918 | 1.623 | 3.000 | 2.000 |
| Divergence | 0 | −0.650 | −0.650 | −0.288 | 0.734 | 0.288 | −0.288 | 0.120 | 0.248 | 0.432 |
Table 2. The characteristics of the datasets.

| Dataset | Instances | Features | Classes | Dataset | Instances | Features | Classes |
| Zoo | 101 | 16 | 7 | ISOLET | 7797 | 617 | 26 |
| Control | 600 | 60 | 6 | MNIST | 5000 | 784 | 10 |
| Segment | 2310 | 18 | 7 | ODR | 5620 | 64 | 10 |
| MnistData_05 | 3495 | 653 | 10 | Semeion | 1593 | 256 | 10 |
| Binalpha | 1404 | 320 | 36 | SPF | 1941 | 27 | 7 |
| MnistData_10 | 6996 | 688 | 10 | Texture | 5500 | 40 | 11 |
| Caltech101 | 8671 | 784 | 101 | VS | 846 | 18 | 4 |
| Caltech20 | 2386 | 30,000 | 20 | Wine | 178 | 13 | 3 |
| FCT | 3780 | 54 | 7 | MF | 2000 | 649 | 10 |
| IS | 2310 | 19 | 7 | LS | 6435 | 36 | 6 |
Table 3. Our comparison of the proposed method to the other selected methods, according to NMI.

| Dataset | DREC [13] | LWGP [18] | LWEA [18] | MCLA [37] | PTA-CL [17] | CESHL [11] | SPCE [10] | TRCE [27] | ELWECDL | DLWECDL |
| VS | 0.1487 | 0.1320 | 0.1330 | 0.1472 | 0.1037 | 0.1444 | 0.1655 | 0.1368 | 0.1527 | 0.1592 |
| Texture | 0.7693 | 0.7430 | 0.7780 | 0.7220 | 0.6963 | 0.7552 | 0.7850 | 0.7610 | 0.7942 | 0.7778 |
| SPF | 0.1490 | 0.1520 | 0.1510 | 0.1350 | 0.0808 | 0.1398 | 0.2120 | 0.1330 | 0.1726 | 0.1853 |
| Semeion | 0.6563 | 0.6420 | 0.6550 | 0.5603 | 0.6695 | 0.6584 | 0.6256 | 0.6387 | 0.6645 | 0.6646 |
| ODR | 0.7442 | 0.8160 | 0.8290 | 0.6220 | 0.6172 | 0.8234 | 0.8193 | 0.8225 | 0.8230 | 0.8282 |
| ISOLET | 0.7168 | 0.7430 | 0.7450 | 0.6798 | 0.7018 | 0.7491 | 0.7358 | 0.7502 | 0.7475 | 0.7545 |
| MNIST | 0.6121 | 0.6350 | 0.6460 | 0.5141 | 0.6102 | 0.6252 | 0.6006 | 0.6309 | 0.6762 | 0.6740 |
| FCT | 0.2320 | 0.2000 | 0.2310 | 0.1730 | 0.2452 | 0.2015 | 0.2720 | 0.1980 | 0.2593 | 0.2574 |
| MF | 0.6553 | 0.6820 | 0.6590 | 0.6170 | 0.6290 | 0.6576 | 0.6737 | 0.6500 | 0.6933 | 0.6886 |
| LS | 0.6257 | 0.6440 | 0.6160 | 0.5500 | 0.5950 | 0.6412 | 0.5660 | 0.6620 | 0.6699 | 0.6425 |
| Control | 0.7215 | 0.6840 | 0.6850 | 0.7181 | 0.5963 | 0.6789 | 0.7307 | 0.7054 | 0.7166 | 0.7526 |
| Wine | 0.7523 | 0.7607 | 0.7630 | N/A | N/A | 0.7653 | 0.7645 | 0.7688 | 0.7679 | 0.7682 |
| IS | 0.6433 | 0.6290 | 0.6210 | 0.6367 | 0.6225 | 0.6288 | 0.5904 | 0.6152 | 0.6597 | 0.6682 |
| Binalpha | 0.5888 | 0.5502 | 0.5557 | 0.5824 | 0.5651 | 0.5439 | 0.6068 | 0.5953 | 0.5963 | 0.6068 |
| Caltech101 | 0.5407 | 0.5327 | N/A | 0.5221 | 0.5359 | N/A | N/A | N/A | 0.5486 | 0.5559 |
| Caltech20 | 0.4204 | 0.4300 | 0.4520 | 0.3844 | 0.4181 | 0.4345 | 0.4600 | 0.4590 | 0.4490 | 0.4630 |
| Mnist_DATA_05 | 0.5059 | 0.5065 | 0.4975 | 0.4699 | 0.4987 | 0.4997 | 0.5039 | 0.5010 | 0.5017 | 0.5062 |
| Mnist_DATA_10 | 0.5016 | 0.4817 | 0.4637 | 0.4876 | 0.5004 | 0.4963 | 0.4821 | 0.4988 | 0.4924 | 0.5020 |
| ZOO | 0.8312 | 0.8468 | 0.8036 | 0.7860 | 0.7773 | 0.8869 | 0.8981 | 0.8704 | 0.8635 | 0.8652 |
| Segment | 0.5967 | 0.5889 | 0.5990 | 0.5944 | 0.5894 | 0.6061 | 0.5993 | 0.6096 | 0.6066 | 0.6188 |
Table 4. Our comparison of the proposed method to the other selected methods, according to ARI.

| Dataset | DREC [13] | LWGP [18] | LWEA [18] | MCLA [37] | PTA-CL [17] | CESHL [11] | SPCE [10] | TRCE [27] | ELWECDL | DLWECDL |
| VS | 0.1248 | 0.0970 | 0.1160 | 0.1189 | 0.0775 | 0.1235 | 0.1004 | 0.1127 | 0.1134 | 0.1249 |
| Texture | 0.6219 | 0.6200 | 0.6890 | 0.5970 | 0.5774 | 0.6400 | 0.5780 | 0.6210 | 0.7116 | 0.6807 |
| SPF | 0.1110 | 0.0830 | 0.0840 | 0.0874 | 0.0449 | 0.0659 | 0.0880 | 0.0590 | 0.1098 | 0.1191 |
| Semeion | 0.5468 | 0.5200 | 0.5390 | 0.4250 | 0.5625 | 0.5377 | 0.4742 | 0.5013 | 0.5427 | 0.5488 |
| ODR | 0.7675 | 0.7630 | 0.7820 | 0.6495 | 0.6647 | 0.7659 | 0.7833 | 0.7677 | 0.7789 | 0.7832 |
| ISOLET | 0.4781 | 0.5180 | 0.5550 | 0.4438 | 0.4675 | 0.5360 | 0.4788 | 0.5225 | 0.5396 | 0.5641 |
| MNIST | 0.4828 | 0.5120 | 0.5500 | 0.3800 | 0.5145 | 0.4894 | 0.4676 | 0.4879 | 0.5884 | 0.5814 |
| FCT | 0.1236 | 0.1170 | 0.1290 | 0.0933 | 0.1548 | 0.1242 | 0.1130 | 0.0950 | 0.1754 | 0.1769 |
| MF | 0.5284 | 0.5620 | 0.5250 | 0.5430 | N/A | 0.5217 | 0.5346 | 0.5210 | 0.5804 | 0.5707 |
| LS | 0.5463 | 0.5800 | 0.5680 | 0.4960 | 0.4520 | 0.5819 | 0.4750 | 0.5880 | 0.6913 | 0.6448 |
| Control | 0.5905 | 0.5415 | 0.5480 | 0.5847 | 0.4782 | 0.5580 | 0.5963 | 0.5675 | 0.5884 | 0.6328 |
| Wine | 0.7577 | 0.7760 | 0.7740 | N/A | N/A | 0.7710 | 0.7756 | 0.7753 | 0.7753 | 0.7760 |
| IS | 0.5370 | 0.5290 | 0.5220 | 0.5305 | 0.5165 | 0.5348 | 0.4803 | 0.4996 | 0.5670 | 0.5680 |
| Binalpha | 0.2988 | 0.3000 | 0.2890 | 0.2940 | 0.2807 | 0.2607 | 0.2816 | 0.2976 | 0.3136 | 0.3227 |
| Caltech101 | 0.2823 | 0.2447 | N/A | 0.2551 | 0.3054 | N/A | N/A | N/A | 0.3044 | 0.3332 |
| Caltech20 | 0.3098 | 0.2670 | 0.3520 | 0.2730 | 0.3046 | 0.3672 | 0.3170 | 0.2370 | 0.3386 | 0.3719 |
| Mnist_DATA_05 | 0.3893 | 0.3750 | 0.3907 | 0.3273 | 0.3784 | 0.3948 | 0.3832 | 0.3690 | 0.3923 | 0.3880 |
| Mnist_DATA_10 | 0.4014 | 0.3706 | 0.3883 | 0.3800 | 0.4136 | 0.3932 | 0.3778 | 0.3876 | 0.3894 | 0.3977 |
| ZOO | 0.8203 | 0.7935 | 0.7054 | 0.6715 | 0.6716 | 0.9253 | 0.9473 | 0.8790 | 0.8617 | 0.8840 |
| Segment | 0.4928 | 0.4619 | 0.4919 | 0.4881 | 0.4390 | 0.4994 | 0.4967 | 0.4919 | 0.5048 | 0.5154 |
Table 5. Our comparison of the time performance for smaller ρ values.

| Dataset | HGBF [38] | SEC [39] | PTGP [17] | PTA-AL [17] | MCLA [37] | LWGP [18] | DREC [13] | DLWECDL (ρ = 2.3) | DLWECDL (ρ = 3.5) |
| Caltech20 | 1.2648 | 8.2952 | 0.1628 | 0.1769 | 0.889 | 0.8645 | 11.4153 | 28.5232 | 16.6796 |
| FCT | 0.3354 | 23.4457 | 0.0573 | 0.0211 | 0.8295 | 0.1356 | 23.8359 | 36.7186 | 23.3484 |
| IS | 0.1218 | 5.6817 | 0.0346 | 0.012 | 0.6511 | 0.1062 | 0.9847 | 5.8394 | 1.5568 |
| ISOLET | 0.9308 | 151.9821 | 0.1063 | 0.0373 | 1.0864 | 0.1419 | 82.7747 | 123.2756 | 59.5029 |
| MNIST | 0.3202 | 41.2678 | 0.1068 | 0.0231 | 0.8555 | 0.0598 | 53.7221 | 29.9373 | 74.0810 |
| ODR | 0.3156 | 56.9395 | 0.0576 | 0.0175 | 0.8576 | 0.0549 | 51.6944 | 51.0083 | 26.0801 |
| SPF | 0.0576 | 3.5059 | 0.0285 | 0.0072 | 0.6918 | 0.0354 | 0.5598 | 1.1896 | 0.6515 |
| Semeion | 0.0607 | 2.5935 | 0.0384 | 0.0116 | 0.6849 | 0.0429 | 3.2673 | 4.2307 | 2.6795 |
| Texture | 0.2214 | 63.5331 | 0.0514 | 0.0158 | 0.7844 | 0.0703 | 8.3535 | 28.9052 | 20.1014 |
| VS | 0.0522 | 0.4591 | 0.0242 | 0.006 | 0.6647 | 0.0228 | 0.8366 | 1.2248 | 0.7616 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
