Divergence-Based Locally Weighted Ensemble Clustering with Dictionary Learning and L2,1-Norm
Abstract
1. Introduction
The main contributions of this work are summarized as follows:
- (1) The proposal of a Kullback–Leibler (KL) divergence-based weighting method to better reveal the relationships between clusters;
- (2) The use of low-rank representation instead of dense representation to better explore the hidden effective information and low-rank structures of the original matrices;
- (3) The application of the L2,1-norm to the noise term to improve robustness;
- (4) The introduction of adaptive dictionary learning to better learn low-rank structures;
- (5) Extensive experiments demonstrating that the proposed DLWECDL significantly outperforms other state-of-the-art approaches.
2. Related Works
2.1. Ensemble Clustering
2.2. Microcluster Representatives
2.3. Information Entropy-Based Locally Weighted Method
2.4. Dense Representation Ensemble Clustering
3. Divergence-Based Locally Weighted Ensemble Clustering with Dictionary Learning (DLWECDL)
3.1. Divergence-Based Locally Weighted Method
3.2. L2,1-Norm Subspace Clustering of Adaptive Dictionaries
Algorithm 1: Divergence-based locally weighted ensemble clustering with dictionary learning (DLWECDL).
Input: M base clusterings. Output: consensus clustering result S.
1. Reconstruct the data matrix to obtain the microcluster representative matrix.
2. Calculate the local divergence weights and local entropy weights, and weight the microcluster representative matrix.
3. Learn the low-rank structure Z by low-rank representation with adaptive dictionary learning and the L2,1-norm.
4. Calculate H from the SVD of Z and build the similarity matrix W from H.
5. Perform Ncut to partition the similarity matrix W.
6. Obtain the consensus result S by microcluster representative label mapping.
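To make the pipeline concrete, the following is a minimal Python/NumPy sketch of Algorithm 1. It is an illustration, not the authors' implementation: the names `weight_fn` and `lrr_solver`, the one-hot microcluster encoding, and the squared-cosine similarity built from H are our assumptions, the L2,1-norm LRR with adaptive dictionary learning (Section 3.2) is abstracted behind a placeholder callable, and Ncut is approximated with scikit-learn's spectral clustering.

```python
import numpy as np
from sklearn.cluster import SpectralClustering  # stand-in for Ncut partitioning

def dlwecdl_pipeline(base_labels, n_clusters, lrr_solver, weight_fn):
    """Hypothetical sketch of Algorithm 1; not the authors' code.

    base_labels : (M, N) int array, M base clusterings of N instances,
                  each clustering using labels 0..k_m - 1.
    lrr_solver  : callable X -> Z, placeholder for the L2,1-norm LRR with
                  adaptive dictionary learning of Section 3.2.
    weight_fn   : callable X -> (D,) weights, placeholder for the local
                  divergence/entropy weighting of Section 3.1.
    """
    # Step 1: instances with identical base-label vectors form a microcluster.
    keys, inverse = np.unique(base_labels.T, axis=0, return_inverse=True)
    # Microcluster representative matrix: one one-hot block per base clustering.
    X = np.concatenate([np.eye(keys[:, m].max() + 1)[keys[:, m]]
                        for m in range(keys.shape[1])], axis=1)

    # Step 2: weight the columns (clusters) of the representative matrix.
    X = X * weight_fn(X)

    # Step 3: low-rank structure Z of the weighted representatives.
    Z = lrr_solver(X)

    # Step 4: H from the SVD of Z, then a similarity matrix from H
    # (squared cosine of the rows of H is one common LRR-style choice).
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    H = Vt.T * np.sqrt(s)                     # one row per microcluster
    Hn = H / np.maximum(np.linalg.norm(H, axis=1, keepdims=True), 1e-12)
    W = (Hn @ Hn.T) ** 2

    # Step 5: Ncut-style partitioning of the microcluster similarity graph.
    micro_labels = SpectralClustering(n_clusters=n_clusters,
                                      affinity='precomputed').fit_predict(W)

    # Step 6: map microcluster labels back to the original instances.
    return micro_labels[inverse]
```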
3.3. Optimization Method
3.3.1. Subproblem J
3.3.2. Subproblem Z
3.3.3. Subproblem E
3.3.4. Subproblem P
Algorithm 2: Optimization algorithm for DLWECDL.
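As one concrete illustration of these subproblem updates: with an L2,1-norm penalty, the E-subproblem in ADMM-style solvers typically reduces to the well-known column-wise shrinkage operator. A minimal sketch, where `tau` (the penalty-to-step-size ratio) and the residual matrix `Q` are our assumed variable names, not the paper's:

```python
import numpy as np

def prox_l21(Q, tau):
    """Closed-form solution of argmin_E tau * ||E||_{2,1} + 0.5 * ||E - Q||_F^2,
    where ||E||_{2,1} is the sum of the L2 norms of the columns of E."""
    norms = np.linalg.norm(Q, axis=0)                        # per-column L2 norms
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return Q * scale                                         # shrink each column

# Hypothetical use inside one iteration of Algorithm 2 (assumed names):
# E = prox_l21(X - D @ Z + Y1 / mu, lam / mu)
```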
3.3.5. Differences between Our Approach and Other Ensemble Clustering Methods
- (1) Differences in the data matrix. Some methods run their ensemble algorithms on co-association (CA) matrices [10,27], but CA matrices focus on instance-level relationships and ignore the relationships between clusters. Our method is based on an instance–cluster data matrix; the DREC [13], PTA-CL [17] and CESHL [11] methods use similar data matrices. Among these, CESHL does not introduce microclusters and its time efficiency is therefore low, while DREC fails to consider the differences between microclusters; our method remedies both shortcomings. It is worth pointing out that although PTA-CL considers the differences between microclusters, it does not explore their deep structures.
- (2) Differences in the weighting methods. LWEC is built on an entropy-based weighting method [18]. As shown in Section 3.1, entropy-based weighting assigns identical weights to similar clusters and cannot tell them apart, so our method uses KL divergence-based weighting to alleviate this problem to a certain extent (see the sketch after this list). Other weighting methods focus on cluster-level similarities and then map these similarities to the instance level [16].
- (3) Differences in the low-rank representation. Existing low-rank representation-based ensemble methods all treat the original data directly as the dictionary [28,29]. Since a good dictionary is crucial for learning the similarity matrix, our method uses a novel low-rank representation with dictionary learning constraints.
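The following toy example illustrates point (2): an entropy-based weight cannot separate clusters whose overlap distributions are permutations of each other, whereas a divergence-based weight can. This is a simplified illustration only; `q` is a hypothetical reference distribution, and the exact weight definitions used by DLWECDL are those given in Section 3.1.

```python
import numpy as np

def cluster_entropy(p):
    """Shannon entropy (bits) of a cluster's overlap distribution p."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def kl_divergence(p, q):
    """KL(p || q) in bits; assumes q > 0 wherever p > 0."""
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

# Two clusters with permuted overlap distributions have equal entropy,
# yet diverge from a reference distribution by different amounts:
p1 = np.array([0.5, 0.25, 0.25])
p2 = np.array([0.25, 0.5, 0.25])
q  = np.array([0.6, 0.2, 0.2])   # hypothetical reference distribution
print(cluster_entropy(p1) == cluster_entropy(p2))  # True: entropy cannot tell them apart
print(kl_divergence(p1, q), kl_divergence(p2, q))  # different divergence values
```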
4. Experiments
4.1. Datasets and Evaluation Methods
4.2. Experimental Settings
4.3. Experimental Results
The proposed DLWECDL was compared with the following baselines:
- DREC [13], a dense representation-based method that introduces microclusters to reduce the amount of data;
- LWGP and LWEA [18], which both use locally weighted methods (LWGP is based on graph partitioning and LWEA on hierarchical clustering);
- MCLA [37], a clustering ensemble method based on hypergraph partitioning;
- PTA-CL [17], which introduces microclusters, explores probabilistic trajectories based on random walks and then applies complete-linkage hierarchical agglomerative clustering;
- CESHL [11], a clustering ensemble method based on structured hypergraph learning;
- SPCE [10], which introduces a self-paced learning method to learn consensus results from base clusterings;
- TRCE [27], a multi-graph learning clustering ensemble method that considers tri-level robustness.
4.4. Impact of Hyperparameters
4.5. Running Time
4.6. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
1. Zhou, Z.H. Machine Learning; Springer Nature: Berlin/Heidelberg, Germany, 2021.
2. Rupp, A.A. Clustering and Classification; Oxford University Press: Oxford, UK, 2013.
3. Omran, M.G.; Engelbrecht, A.P.; Salman, A. An overview of clustering methods. Intell. Data Anal. 2007, 11, 583–605.
4. Li, T.; Qian, Z.; Deng, W.; Zhang, D.; Lu, H.; Wang, S. Forecasting crude oil prices based on variational mode decomposition and random sparse Bayesian learning. Appl. Soft Comput. 2021, 113, 108032.
5. Saxena, A.; Prasad, M.; Gupta, A.; Bharill, N.; Patel, O.P.; Tiwari, A.; Er, M.J.; Ding, W.; Lin, C.T. A review of clustering techniques and developments. Neurocomputing 2017, 267, 664–681.
6. Mittal, M.; Goyal, L.M.; Hemanth, D.J.; Sethi, J.K. Clustering approaches for high-dimensional databases: A review. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2019, 9, e1300.
7. Golalipour, K.; Akbari, E.; Hamidi, S.S.; Lee, M.; Enayatifar, R. From clustering to clustering ensemble selection: A review. Eng. Appl. Artif. Intell. 2021, 104, 104388.
8. Zhang, M. Weighted clustering ensemble: A review. Pattern Recognit. 2021, 124, 108428.
9. Wu, X.; Ma, T.; Cao, J.; Tian, Y.; Alabdulkarim, A. A comparative study of clustering ensemble algorithms. Comput. Electr. Eng. 2018, 68, 603–615.
10. Zhou, P.; Du, L.; Liu, X.; Shen, Y.D.; Fan, M.; Li, X. Self-paced clustering ensemble. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 1497–1511.
11. Zhou, P.; Wang, X.; Du, L.; Li, X. Clustering ensemble via structured hypergraph learning. Inf. Fusion 2022, 78, 171–179.
12. Huang, D.; Wang, C.D.; Wu, J.S.; Lai, J.H.; Kwoh, C.K. Ultra-scalable spectral clustering and ensemble clustering. IEEE Trans. Knowl. Data Eng. 2019, 32, 1212–1226.
13. Zhou, J.; Zheng, H.; Pan, L. Ensemble clustering based on dense representation. Neurocomputing 2019, 357, 66–76.
14. Li, F.; Qian, Y.; Wang, J.; Dang, C.; Jing, L. Clustering ensemble based on sample's stability. Artif. Intell. 2019, 273, 37–55.
15. Jia, Y.; Tao, S.; Wang, R.; Wang, Y. Ensemble Clustering via Co-association Matrix Self-enhancement. arXiv 2022, arXiv:2205.05937.
16. Huang, D.; Wang, C.D.; Peng, H.; Lai, J.; Kwoh, C.K. Enhanced ensemble clustering via fast propagation of cluster-wise similarities. IEEE Trans. Syst. Man Cybern. Syst. 2018, 51, 508–520.
17. Huang, D.; Lai, J.H.; Wang, C.D. Robust ensemble clustering using probability trajectories. IEEE Trans. Knowl. Data Eng. 2015, 28, 1312–1326.
18. Huang, D.; Wang, C.D.; Lai, J.H. Locally weighted ensemble clustering. IEEE Trans. Cybern. 2017, 48, 1460–1473.
19. Wang, L.; Luo, J.; Wang, H.; Li, T. Markov clustering ensemble. Knowl. Based Syst. 2022, 251, 109196.
20. Li, F.; Qian, Y.; Wang, J. GoT: A Growing Tree Model for Clustering Ensemble. In Proceedings of the AAAI Conference on Artificial Intelligence; AAAI Press: Palo Alto, CA, USA, 2021; Volume 35, pp. 8349–8356.
21. Jia, Y.; Liu, H.; Hou, J.; Zhang, Q. Clustering ensemble meets low-rank tensor approximation. In Proceedings of the AAAI Conference on Artificial Intelligence; AAAI Press: Palo Alto, CA, USA, 2021; Volume 35, pp. 7970–7978.
22. Ji, X.; Liu, S.; Yang, L.; Ye, W.; Zhao, P. Clustering ensemble based on approximate accuracy of the equivalence granularity. Appl. Soft Comput. 2022, 129, 109492.
23. Akbari, E.; Dahlan, H.M.; Ibrahim, R.; Alizadeh, H. Hierarchical cluster ensemble selection. Eng. Appl. Artif. Intell. 2015, 39, 146–156.
24. Jia, J.; Xiao, X.; Liu, B.; Jiao, L. Bagging-based spectral clustering ensemble selection. Pattern Recognit. Lett. 2011, 32, 1456–1467.
25. Chen, J.; Mao, H.; Wang, Z.; Zhang, X. Low-rank representation with adaptive dictionary learning for subspace clustering. Knowl. Based Syst. 2021, 223, 107053.
26. Lin, Z.; Liu, R.; Su, Z. Linearized alternating direction method with adaptive penalty for low-rank representation. Adv. Neural Inf. Process. Syst. 2011, 24, 1–9.
27. Zhou, P.; Du, L.; Shen, Y.D.; Li, X. Tri-level robust clustering ensemble with multiple graph learning. In Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence; AAAI Press: Palo Alto, CA, USA, 2021; pp. 11125–11133.
28. Tao, Z.; Liu, H.; Li, S.; Ding, Z.; Fu, Y. Robust spectral ensemble clustering via rank minimization. ACM Trans. Knowl. Discov. Data 2019, 13, 1–25.
29. Tao, Z.; Liu, H.; Li, S.; Fu, Y. Robust spectral ensemble clustering. In Proceedings of the 25th ACM International Conference on Information and Knowledge Management, Indianapolis, IN, USA, 24–28 October 2016; pp. 367–376.
30. Jing, L.; Tian, K.; Huang, J.Z. Stratified feature sampling method for ensemble clustering of high dimensional data. Pattern Recognit. 2015, 48, 3688–3702.
31. Shao, M.; Li, S.; Ding, Z.; Fu, Y. Deep linear coding for fast graph clustering. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, Buenos Aires, Argentina, 25–31 July 2015.
32. Hubert, L.; Arabie, P. Comparing partitions. J. Classif. 1985, 2, 193–218.
33. Li, T.; Qian, Z.; He, T. Short-term load forecasting with improved CEEMDAN and GWO-based multiple kernel ELM. Complexity 2020, 2020.
34. Li, T.; Shi, J.; Deng, W.; Hu, Z. Pyramid particle swarm optimization with novel strategies of competition and cooperation. Appl. Soft Comput. 2022, 121, 108731.
35. Deng, W.; Ni, H.; Liu, Y.; Chen, H.; Zhao, H. An adaptive differential evolution algorithm based on belief space and generalized opposition-based learning for resource allocation. Appl. Soft Comput. 2022, 127, 109419.
36. Li, T.; Shi, J.; Zhang, D. Color image encryption based on joint permutation and diffusion. J. Electron. Imaging 2021, 30, 013008.
37. Strehl, A.; Ghosh, J. Cluster ensembles—A knowledge reuse framework for combining multiple partitions. J. Mach. Learn. Res. 2002, 3, 583–617.
38. Fern, X.Z.; Brodley, C.E. Solving cluster ensemble problems by bipartite graph partitioning. In Proceedings of the Twenty-First International Conference on Machine Learning, Banff, AB, Canada, 4 July 2004; p. 36.
39. Liu, H.; Wu, J.; Liu, T.; Tao, D.; Fu, Y. Spectral ensemble clustering via weighted k-means: Theoretical and practical evidence. IEEE Trans. Knowl. Data Eng. 2017, 29, 1129–1143.
Cluster | C1 | C2 | C3 | C4 | C5 | C6 | C7 | C8 | C9 | C10
---|---|---|---|---|---|---|---|---|---|---
R | 1 | 0.33 | 0.67 | 0.67 | 0.6 | 0.5 | 0.67 | 0.75 | 0.5 | 0.25
Entropy | 0 | 1.837 | 1.837 | 0.918 | 2.342 | 2.500 | 0.918 | 1.623 | 3.000 | 2.000
Divergence | 0 | −0.650 | −0.650 | −0.288 | 0.734 | 0.288 | −0.288 | 0.120 | 0.248 | 0.432
Dataset | Instances | Features | Classes | Dataset | Instances | Features | Classes |
---|---|---|---|---|---|---|---|
Zoo | 101 | 16 | 7 | ISOLET | 7797 | 617 | 26 |
Control | 600 | 60 | 6 | MNIST | 5000 | 784 | 10 |
Segment | 2310 | 18 | 7 | ODR | 5620 | 64 | 10 |
MnistData_05 | 3495 | 653 | 10 | Semeion | 1593 | 256 | 10 |
Binalpha | 1404 | 320 | 36 | SPF | 1941 | 27 | 7 |
MnistData_10 | 6996 | 688 | 10 | Texture | 5500 | 40 | 11 |
Caltech101 | 8671 | 784 | 101 | VS | 846 | 18 | 4 |
Caltech20 | 2386 | 30,000 | 20 | Wine | 178 | 13 | 3 |
FCT | 3780 | 54 | 7 | MF | 2000 | 649 | 10 |
IS | 2310 | 19 | 7 | LS | 6435 | 36 | 6 |
Dataset | DREC [13] | LWGP [18] | LWEA [18] | MCLA [37] | PTA-CL [17] | CESHL [11] | SPCE [10] | TRCE [27] | ELWECDL | DLWECDL
---|---|---|---|---|---|---|---|---|---|---|
VS | 0.1487 | 0.1320 | 0.1330 | 0.1472 | 0.1037 | 0.1444 | 0.1655 | 0.1368 | 0.1527 | 0.1592 |
Texture | 0.7693 | 0.7430 | 0.7780 | 0.7220 | 0.6963 | 0.7552 | 0.7850 | 0.7610 | 0.7942 | 0.7778 |
SPF | 0.1490 | 0.1520 | 0.1510 | 0.1350 | 0.0808 | 0.1398 | 0.2120 | 0.1330 | 0.1726 | 0.1853 |
Semeion | 0.6563 | 0.6420 | 0.6550 | 0.5603 | 0.6695 | 0.6584 | 0.6256 | 0.6387 | 0.6645 | 0.6646 |
ODR | 0.7442 | 0.8160 | 0.8290 | 0.6220 | 0.6172 | 0.8234 | 0.8193 | 0.8225 | 0.8230 | 0.8282 |
ISOLET | 0.7168 | 0.7430 | 0.7450 | 0.6798 | 0.7018 | 0.7491 | 0.7358 | 0.7502 | 0.7475 | 0.7545 |
MNIST | 0.6121 | 0.6350 | 0.6460 | 0.5141 | 0.6102 | 0.6252 | 0.6006 | 0.6309 | 0.6762 | 0.6740 |
FCT | 0.2320 | 0.2000 | 0.2310 | 0.1730 | 0.2452 | 0.2015 | 0.2720 | 0.1980 | 0.2593 | 0.2574 |
MF | 0.6553 | 0.6820 | 0.6590 | 0.6170 | 0.6290 | 0.6576 | 0.6737 | 0.6500 | 0.6933 | 0.6886 |
LS | 0.6257 | 0.6440 | 0.6160 | 0.5500 | 0.5950 | 0.6412 | 0.5660 | 0.6620 | 0.6699 | 0.6425 |
Control | 0.7215 | 0.6840 | 0.6850 | 0.7181 | 0.5963 | 0.6789 | 0.7307 | 0.7054 | 0.7166 | 0.7526 |
Wine | 0.7523 | 0.7607 | 0.7630 | N/A | N/A | 0.7653 | 0.7645 | 0.7688 | 0.7679 | 0.7682 |
IS | 0.6433 | 0.6290 | 0.6210 | 0.6367 | 0.6225 | 0.6288 | 0.5904 | 0.6152 | 0.6597 | 0.6682 |
Binalpha | 0.5888 | 0.5502 | 0.5557 | 0.5824 | 0.5651 | 0.5439 | 0.6068 | 0.5953 | 0.5963 | 0.6068 |
Caltech101 | 0.5407 | 0.5327 | N/A | 0.5221 | 0.5359 | N/A | N/A | N/A | 0.5486 | 0.5559 |
Caltech20 | 0.4204 | 0.4300 | 0.4520 | 0.3844 | 0.4181 | 0.4345 | 0.4600 | 0.4590 | 0.4490 | 0.4630 |
Mnist_DATA_05 | 0.5059 | 0.5065 | 0.4975 | 0.4699 | 0.4987 | 0.4997 | 0.5039 | 0.5010 | 0.5017 | 0.5062 |
Mnist_DATA_10 | 0.5016 | 0.4817 | 0.4637 | 0.4876 | 0.5004 | 0.4963 | 0.4821 | 0.4988 | 0.4924 | 0.5020 |
ZOO | 0.8312 | 0.8468 | 0.8036 | 0.7860 | 0.7773 | 0.8869 | 0.8981 | 0.8704 | 0.8635 | 0.8652 |
Segment | 0.5967 | 0.5889 | 0.5990 | 0.5944 | 0.5894 | 0.6061 | 0.5993 | 0.6096 | 0.6066 | 0.6188 |
Dataset | DREC [13] | LWGP [18] | LWEA [18] | MCLA [37] | PTA-CL [17] | CESHL [11] | SPCE [10] | TRCE [27] | ELWECDL | DLWECDL
---|---|---|---|---|---|---|---|---|---|---|
VS | 0.1248 | 0.0970 | 0.1160 | 0.1189 | 0.0775 | 0.1235 | 0.1004 | 0.1127 | 0.1134 | 0.1249 |
Texture | 0.6219 | 0.6200 | 0.6890 | 0.5970 | 0.5774 | 0.6400 | 0.5780 | 0.6210 | 0.7116 | 0.6807 |
SPF | 0.1110 | 0.0830 | 0.0840 | 0.0874 | 0.0449 | 0.0659 | 0.0880 | 0.0590 | 0.1098 | 0.1191 |
Semeion | 0.5468 | 0.5200 | 0.5390 | 0.4250 | 0.5625 | 0.5377 | 0.4742 | 0.5013 | 0.5427 | 0.5488 |
ODR | 0.7675 | 0.7630 | 0.7820 | 0.6495 | 0.6647 | 0.7659 | 0.7833 | 0.7677 | 0.7789 | 0.7832 |
ISOLET | 0.4781 | 0.5180 | 0.5550 | 0.4438 | 0.4675 | 0.5360 | 0.4788 | 0.5225 | 0.5396 | 0.5641 |
MNIST | 0.4828 | 0.5120 | 0.5500 | 0.3800 | 0.5145 | 0.4894 | 0.4676 | 0.4879 | 0.5884 | 0.5814 |
FCT | 0.1236 | 0.1170 | 0.1290 | 0.0933 | 0.1548 | 0.1242 | 0.1130 | 0.0950 | 0.1754 | 0.1769 |
MF | 0.5284 | 0.5620 | 0.5250 | 0.5430 | N/A | 0.5217 | 0.5346 | 0.5210 | 0.5804 | 0.5707 |
LS | 0.5463 | 0.5800 | 0.5680 | 0.4960 | 0.4520 | 0.5819 | 0.4750 | 0.5880 | 0.6913 | 0.6448 |
Control | 0.5905 | 0.5415 | 0.5480 | 0.5847 | 0.4782 | 0.5580 | 0.5963 | 0.5675 | 0.5884 | 0.6328 |
Wine | 0.7577 | 0.7760 | 0.7740 | N/A | N/A | 0.7710 | 0.7756 | 0.7753 | 0.7753 | 0.7760 |
IS | 0.5370 | 0.5290 | 0.5220 | 0.5305 | 0.5165 | 0.5348 | 0.4803 | 0.4996 | 0.5670 | 0.5680 |
Binalpha | 0.2988 | 0.3000 | 0.2890 | 0.2940 | 0.2807 | 0.2607 | 0.2816 | 0.2976 | 0.3136 | 0.3227 |
Caltech101 | 0.2823 | 0.2447 | N/A | 0.2551 | 0.3054 | N/A | N/A | N/A | 0.3044 | 0.3332 |
Caltech20 | 0.3098 | 0.2670 | 0.3520 | 0.2730 | 0.3046 | 0.3672 | 0.3170 | 0.2370 | 0.3386 | 0.3719 |
Mnist_DATA_05 | 0.3893 | 0.3750 | 0.3907 | 0.3273 | 0.3784 | 0.3948 | 0.3832 | 0.3690 | 0.3923 | 0.3880 |
Mnist_DATA_10 | 0.4014 | 0.3706 | 0.3883 | 0.3800 | 0.4136 | 0.3932 | 0.3778 | 0.3876 | 0.3894 | 0.3977 |
ZOO | 0.8203 | 0.7935 | 0.7054 | 0.6715 | 0.6716 | 0.9253 | 0.9473 | 0.8790 | 0.8617 | 0.8840 |
Segment | 0.4928 | 0.4619 | 0.4919 | 0.4881 | 0.4390 | 0.4994 | 0.4967 | 0.4919 | 0.5048 | 0.5154 |
Dataset | HBGF [38] | SEC [39] | PTGP [17] | PTA-AL [17] | MCLA [37] | LWGP [18] | DREC [13] | DLWECDL () | DLWECDL ()
---|---|---|---|---|---|---|---|---|---
Caltech20 | 1.2648 | 8.2952 | 0.1628 | 0.1769 | 0.889 | 0.8645 | 11.4153 | 28.5232 | 16.6796 |
FCT | 0.3354 | 23.4457 | 0.0573 | 0.0211 | 0.8295 | 0.1356 | 23.8359 | 36.7186 | 23.3484 |
IS | 0.1218 | 5.6817 | 0.0346 | 0.012 | 0.6511 | 0.1062 | 0.9847 | 5.8394 | 1.5568 |
ISOLET | 0.9308 | 151.9821 | 0.1063 | 0.0373 | 1.0864 | 0.1419 | 82.7747 | 123.2756 | 59.5029 |
MNIST | 0.3202 | 41.2678 | 0.1068 | 0.0231 | 0.8555 | 0.0598 | 53.722 | 129.9373 | 74.0810 |
ODR | 0.3156 | 56.9395 | 0.0576 | 0.0175 | 0.8576 | 0.0549 | 51.6944 | 51.0083 | 26.0801 |
SPF | 0.0576 | 3.5059 | 0.0285 | 0.0072 | 0.6918 | 0.0354 | 0.5598 | 1.1896 | 0.6515 |
Semeion | 0.0607 | 2.5935 | 0.0384 | 0.0116 | 0.6849 | 0.0429 | 3.2673 | 4.2307 | 2.6795 |
Texture | 0.2214 | 63.5331 | 0.0514 | 0.0158 | 0.7844 | 0.0703 | 8.3535 | 28.9052 | 20.1014 |
VS | 0.0522 | 0.4591 | 0.0242 | 0.006 | 0.6647 | 0.0228 | 0.8366 | 1.2248 | 0.7616 |