Neighborhood Attribute Reduction: A Multicriterion Strategy Based on Sample Selection
Abstract
1. Introduction
2. Preliminaries
2.1. Neighborhood Relation
2.2. Neighborhood Rough Set and Neighborhood Classifier
Algorithm 1 Neighborhood Classifier (NEC)
Inputs: Decision system DS = ⟨U, AT ∪ {d}⟩, a testing sample y, radius δ;
Outputs: Predicted label of y: Pre.
1. ∀x ∈ U, compute the distance Δ(x, y);
2. Compute the neighborhood of y by Equation (5), and then obtain … by Equation (6);
// Note that in NEC, …;
3. ∀X_i ∈ U/IND({d}), compute the probability Pr(X_i | …);
4. Select X_j with the maximal probability Pr among all X_i ∈ U/IND({d});
5. Find the corresponding label in terms of X_j and assign it to Pre;
6. Return Pre.
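To make the steps of NEC concrete, the following is a minimal sketch in Python, assuming Euclidean distance and majority voting over the δ-neighborhood; the helper names (`neighborhood`, `nec_predict`) and the fallback to the nearest training sample when the neighborhood is empty are illustrative choices, not taken from the paper.

```python
import numpy as np

def neighborhood(X_train, y_sample, delta):
    """Indices of training samples whose Euclidean distance to y_sample is at most delta."""
    dists = np.linalg.norm(X_train - y_sample, axis=1)
    return np.where(dists <= delta)[0]

def nec_predict(X_train, d_train, y_sample, delta):
    """Predict the label of y_sample from the majority class inside its delta-neighborhood."""
    idx = neighborhood(X_train, y_sample, delta)
    if idx.size == 0:
        # empty neighborhood: fall back to the nearest training sample (an illustrative choice)
        return d_train[np.argmin(np.linalg.norm(X_train - y_sample, axis=1))]
    labels, counts = np.unique(d_train[idx], return_counts=True)
    return labels[np.argmax(counts)]   # label with the largest Pr(label | neighborhood)

# toy usage
X_train = np.array([[0.10, 0.20], [0.15, 0.22], [0.90, 0.80]])
d_train = np.array([1, 1, 2])
print(nec_predict(X_train, d_train, np.array([0.12, 0.21]), delta=0.2))   # -> 1
```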
2.3. Measures
3. Multiple-Criteria Reduct with Sample Selection
3.1. Attribute Reduction
- 1. A ⊆ AT is the approximation quality reduct if and only if γ_A(d) ≥ γ_AT(d) and ∀B ⊂ A, γ_B(d) < γ_AT(d);
- 2. A ⊆ AT is the conditional entropy reduct if and only if ENT_A(d) ≤ ENT_AT(d) and ∀B ⊂ A, ENT_B(d) > ENT_AT(d).

(A code sketch of both measures follows this list.)
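The following sketch computes the two measures used by these definitions, assuming the standard neighborhood rough set forms: the approximation quality γ_A(d) is taken as the fraction of samples whose δ-neighborhood (over attribute subset A) is contained in their own decision class, and the conditional entropy is built from the overlap between each neighborhood and the sample's decision class. The paper's exact entropy formula is not reproduced here, so this variant should be read as an assumption.

```python
import numpy as np

def neighborhoods(X, A, delta):
    """Boolean matrix N[i, j] = True iff x_j lies in the delta-neighborhood of x_i over attributes A."""
    XA = X[:, list(A)]
    dists = np.linalg.norm(XA[:, None, :] - XA[None, :, :], axis=2)
    return dists <= delta

def approximation_quality(X, d, A, delta):
    """gamma_A(d): fraction of samples whose neighborhood is contained in their own decision class."""
    N = neighborhoods(X, A, delta)
    same_label = d[None, :] == d[:, None]
    return np.all(~N | same_label, axis=1).mean()

def conditional_entropy(X, d, A, delta):
    """An assumed neighborhood conditional entropy: mean of -log of the fraction of each
    neighborhood that shares the sample's decision label (0 when all neighborhoods are pure)."""
    N = neighborhoods(X, A, delta)
    same_label = d[None, :] == d[:, None]
    overlap = (N & same_label).sum(axis=1) / N.sum(axis=1)
    return -np.mean(np.log(overlap))

# toy usage
rng = np.random.default_rng(0)
X = rng.random((50, 5)); d = rng.integers(0, 3, 50)
print(approximation_quality(X, d, A=[0, 2, 4], delta=0.3))
print(conditional_entropy(X, d, A=[0, 2, 4], delta=0.3))
```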
Algorithm 2 Approximation Quality Reduct (AQR)
Inputs: Decision system DS = ⟨U, AT ∪ {d}⟩, radius δ.
Outputs: An approximation quality reduct A.
1. Compute γ_AT(d);
2. A = ∅;
3. Do
   ∀a ∈ AT − A, compute the significance of a with respect to approximation quality; select the attribute b with the maximal significance and let A = A ∪ {b};
   Until γ_A(d) ≥ γ_AT(d);
4. Return A.
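A minimal sketch of the greedy forward search behind AQR, assuming Euclidean-distance neighborhoods and taking the attribute significance to be the gain in approximation quality; the stopping test mirrors the Until condition above, and minimality checking is omitted, as in the pseudocode. The `gamma` helper repeats the measure from the sketch in Section 3.1 so that this block runs on its own.

```python
import numpy as np

def gamma(X, d, A, delta):
    """Approximation quality of attribute subset A: the fraction of samples whose
    delta-neighborhood (Euclidean distance over A) lies inside their own decision class."""
    XA = X[:, list(A)]
    N = np.linalg.norm(XA[:, None, :] - XA[None, :, :], axis=2) <= delta
    return np.all(~N | (d[None, :] == d[:, None]), axis=1).mean()

def aqr(X, d, delta):
    """Greedy forward search of Algorithm 2: repeatedly add the attribute with the
    largest gamma gain until gamma(A) reaches gamma over the full attribute set."""
    all_attrs = list(range(X.shape[1]))
    target = gamma(X, d, all_attrs, delta)
    A = []
    while gamma(X, d, A, delta) < target:
        A.append(max((a for a in all_attrs if a not in A),
                     key=lambda a: gamma(X, d, A + [a], delta)))
    return A

# toy usage
rng = np.random.default_rng(2)
X = rng.random((40, 6)); d = rng.integers(0, 2, 40)
print(aqr(X, d, delta=0.3))
```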
3.2. Limitations of Single Measure
3.3. Multiple-Criteria Reduct
A ⊆ AT is a multiple-criteria reduct if and only if:

- 1. γ_A(d) ≥ γ_AT(d) and ENT_A(d) ≤ ENT_AT(d);
- 2. ∀B ⊂ A, γ_B(d) < γ_AT(d) or ENT_B(d) > ENT_AT(d).
Algorithm 3 Multiple-Criteria Reduct (MCR)
Inputs: Decision system DS = ⟨U, AT ∪ {d}⟩, radius δ.
Outputs: A multiple-criteria reduct A.
1. Compute γ_AT(d) and ENT_AT(d);
2. A = ∅;
3. Do
   ∀a ∈ AT − A, compute the two significances of a (with respect to approximation quality and conditional entropy); select the attribute b with the maximal significance and let A = A ∪ {b};
   Until γ_A(d) ≥ γ_AT(d) and ENT_A(d) ≤ ENT_AT(d);
4. Return A.
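A sketch of MCR under the same assumptions as the AQR sketch. The difference is that the loop keeps adding attributes until both constraints hold; how the two significances are combined in each iteration is an assumption here (candidates are ranked by the sum of their approximation quality gain and conditional entropy reduction), since the exact significance function is not reproduced above.

```python
import numpy as np

def measures(X, d, A, delta):
    """Return (approximation quality, conditional entropy) of attribute subset A,
    using the same assumed forms as the earlier sketches."""
    XA = X[:, list(A)]
    N = np.linalg.norm(XA[:, None, :] - XA[None, :, :], axis=2) <= delta
    same = d[None, :] == d[:, None]
    g = np.all(~N | same, axis=1).mean()
    ent = -np.mean(np.log((N & same).sum(axis=1) / N.sum(axis=1)))
    return g, ent

def mcr(X, d, delta):
    """Greedy forward search of Algorithm 3: stop only when both constraints hold."""
    all_attrs = list(range(X.shape[1]))
    g_full, e_full = measures(X, d, all_attrs, delta)
    A = []
    g, e = measures(X, d, A, delta)
    while g < g_full or e > e_full:
        def combined_gain(a):
            ga, ea = measures(X, d, A + [a], delta)
            return (ga - g) + (e - ea)   # assumed way of combining the two significances
        A.append(max((a for a in all_attrs if a not in A), key=combined_gain))
        g, e = measures(X, d, A, delta)
    return A

# toy usage
rng = np.random.default_rng(1)
X = rng.random((40, 6)); d = rng.integers(0, 2, 40)
print(mcr(X, d, delta=0.3))
```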
3.4. Multiple-Criteria Reduct with Sample Selection
Algorithm 4 Multiple-Criteria Reduct with Sample Selection (MCRSS)
Inputs: Decision system DS = ⟨U, AT ∪ {d}⟩, radius δ, and M (the number of K-means runs).
Outputs: A multiple-criteria reduct A.
1. U′ = ∅;
// Initialize the universe of the new decision system;
2. For i = 1 to M
   Execute the K-means clustering algorithm over U and obtain K clusters;
   // In K-means clustering, K is the number of decision classes;
   End For
3. For j = 1 to K
   Obtain the j-th average cluster centroid over the M runs;
   End For
4. For j = 1 to K
   ∀x ∈ U, if x satisfies the distance-based selection condition with respect to the j-th average centroid, then U′ = U′ ∪ {x};
   End For
// The new decision system over U′ is constructed;
5. Compute γ_AT(d) and ENT_AT(d) over the new decision system;
6. A = ∅;
7. Do
   ∀a ∈ AT − A, compute the two significances of a over the new decision system; select the attribute b with the maximal significance and let A = A ∪ {b};
   Until γ_A(d) ≥ γ_AT(d) and ENT_A(d) ≤ ENT_AT(d);
8. Return A.
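A sketch of the sample selection stage of MCRSS, with the assumptions flagged: K-means (scikit-learn, with K set to the number of decision classes) is run M times, the centroids of each run are matched to the first run by nearest distance and then averaged, and a sample is retained when its distance to the nearest averaged centroid does not exceed the mean of such distances. The paper's exact selection threshold is not reproduced here, so the last rule is purely illustrative. The retained samples form the condensed universe on which the MCR search of Algorithm 3 is then executed.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_samples(X, d, M=5, random_state=0):
    """Condense the universe before running MCR (Steps 1-4 of Algorithm 4)."""
    K = len(np.unique(d))                                   # K = number of decision classes
    runs = [KMeans(n_clusters=K, n_init=10, random_state=random_state + i).fit(X).cluster_centers_
            for i in range(M)]
    # greedily match each run's centroids to the first run before averaging (an assumption)
    ref = runs[0]
    matched = [ref] + [c[np.argmin(np.linalg.norm(c[:, None] - ref[None], axis=2), axis=0)]
                       for c in runs[1:]]
    centroids = np.mean(np.stack(matched), axis=0)          # K averaged centroids
    # assumed selection rule: keep samples closer than average to their nearest centroid
    dist_to_nearest = np.min(np.linalg.norm(X[:, None] - centroids[None], axis=2), axis=1)
    keep = dist_to_nearest <= dist_to_nearest.mean()
    return X[keep], d[keep]

# toy usage: the condensed (X_small, d_small) would then be fed to the MCR search
rng = np.random.default_rng(3)
X = rng.random((200, 6)); d = rng.integers(0, 3, 200)
X_small, d_small = select_samples(X, d, M=5)
print(len(X_small), "of", len(X), "samples retained")
```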
4. Experimental Analysis
4.1. Comparisons of Approximation Qualities
- If the value of the radius δ increases, decreasing trends are observed for the approximation qualities of the three different reducts, though those decreasing trends are not necessarily monotonic.
- Compared with AQRSS, MCRSS can preserve or slightly increase approximation qualities. This is mainly because the constraint defined by the measure of approximation quality is also considered in MCRSS. Take the “Ionosphere” dataset for instance: for one of the tested radii, the approximation qualities derived by MCRSS and AQRSS are 0.6049 and 0.4644, respectively.
- An interesting observation is that the approximation qualities obtained by CERSS may be greater than those obtained by AQRSS on some datasets. Take the “Dermatology” dataset for instance: for one of the tested radii, the approximation qualities derived by MCRSS, AQRSS, and CERSS are 0.9333, 0.8606, and 0.9151, respectively. Such results show that AQRSS does not always yield the highest approximation qualities.
4.2. Comparisons of Conditional Entropies
- If the value of the radius δ increases, increasing trends are observed for the conditional entropies of the three different reducts, though those increasing trends are not strictly monotonic.
- In most cases, there are only slight differences between the conditional entropies generated by MCRSS and CERSS, which can be attributed to the fact that the constraint defined by the measure of conditional entropy is also considered in MCRSS. Take the “Breast Tissue” dataset for instance: for one of the tested radii, the conditional entropies derived by MCRSS and CERSS are 0.6127 and 0.6599, respectively.
- In most cases, the conditional entropies obtained by AQRSS are greater than those derived by both CERSS and MCRSS. This observation demonstrates that, if we pay attention only to the single measure of approximation quality, the obtained reduct may not be effective in terms of conditional entropy. Take the “Forest-Type Mapping” dataset for instance: for one of the tested radii, the conditional entropies derived by MCRSS, CERSS, and AQRSS are 0.4744, 0.5507, and 0.8951, respectively.
4.3. Comparisons of Classification Accuracies
4.4. Comparisons of Reduct Lengths
4.5. Comparisons of Time Consumptions
- The time consumption of MCRSS is higher than that of AQRSS and CERSS, though the time complexities of the three algorithms are the same. There are two reasons: (1) MCRSS computes two attribute significances instead of one in each iteration; (2) the reduct derived by MCRSS is frequently longer than those derived by AQRSS and CERSS, i.e., more iterations are required.
- Compared with MCR, the time consumption of MCRSS is significantly reduced. From this point of view, sample selection is effective in saving time during the search for a reduct.
4.6. Comparisons of Core Attributes
- In the comparisons of these three measures, the largest values of approximation quality (classification accuracy) are in bold, and the smallest values of conditional entropy are underlined. It should be emphasized that in Datasets 6 and 10, the values of these three measures are the same. This is mainly because the cores of the three algorithms are the same, which can be seen from Table 3.
- The results shown in Table 4 are generally consistent with the results shown above (Figures 1–3), which were obtained from the datasets with sample selection. The reducts obtained by MCRSS can not only preserve approximation quality (Figure 1) and reduce conditional entropy (Figure 2), but also improve classification accuracy (Figure 3).
- We can find that conditional entropy and approximation quality both play an important role in improving performance, with conditional entropy possibly contributing a little more to classification accuracy: the fact that most of the values derived from CERSS and MCRSS coincide suggests that the conditional entropy constraint is the more helpful of the two in this respect.
5. Conclusions and Future Perspectives
Author Contributions
Funding
Conflicts of Interest
References
a1 | a2 | a3 | a4 | a5 | a6 | a7 | d
---|---|---|---|---|---|---|---
0.8147 | 0.1576 | 0.6557 | 0.7060 | 0.4387 | 0.2760 | 0.7513 | 1 | |
0.9058 | 0.9706 | 0.0357 | 0.0318 | 0.3816 | 0.6797 | 0.2551 | 1 | |
0.1270 | 0.9572 | 0.8491 | 0.2769 | 0.7655 | 0.6551 | 0.5060 | 2 | |
0.9134 | 0.4854 | 0.9340 | 0.0462 | 0.7952 | 0.1626 | 0.6991 | 2 | |
0.6324 | 0.8003 | 0.6787 | 0.0971 | 0.1869 | 0.1190 | 0.8909 | 3 | |
0.0975 | 0.1419 | 0.7577 | 0.8235 | 0.4898 | 0.4984 | 0.9593 | 3 | |
0.2785 | 0.4218 | 0.7431 | 0.6948 | 0.4456 | 0.9597 | 0.5472 | 1 | |
0.5469 | 0.9157 | 0.3922 | 0.3171 | 0.6463 | 0.3404 | 0.1386 | 2 | |
0.9575 | 0.7922 | 0.6555 | 0.9502 | 0.7094 | 0.5853 | 0.1493 | 3 | |
0.9649 | 0.9595 | 0.1712 | 0.0344 | 0.7547 | 0.2238 | 0.2575 | 2 |
ID | Data Sets | Samples | Attributes | Decision Classes |
---|---|---|---|---|
1 | Breast Cancer Wisconsin (Diagnostic) | 569 | 30 | 2 |
2 | Breast Tissue | 106 | 9 | 6 |
3 | Cardiotocography | 2126 | 21 | 10 |
4 | Dermatology | 365 | 34 | 6 |
5 | Forest-Type Mapping | 523 | 27 | 4 |
6 | Hayes Roth | 132 | 4 | 3 |
7 | Ionosphere | 351 | 34 | 2 |
8 | Molecular Biology | 106 | 57 | 2 |
9 | Statlog (Vehicle Silhouettes) | 846 | 18 | 4 |
10 | Vertebral Column | 310 | 6 | 2 |
11 | Wine | 178 | 13 | 3 |
12 | Yeast | 1484 | 8 | 10 |
ID | Datasets | AQRSS | CERSS | MCRSS |
---|---|---|---|---|
1 | Breast Cancer Wisconsin (Diagnostic) | 2,8,10,28 | 2,8,20,28,29 | 2,8,20,28,29 |
2 | Breast Tissue | 2-3 | 2-3,9 | 2-3,9 |
3 | Cardiotocography | 1-2,4-5,8,17,21 | 2,4-5,8,10-12,18 | 1-2,4-5,8,10-12,18 |
4 | Dermatology | 5,15 | 5,15,21 | 5,9,15,21,22 |
5 | Forest-Type Mapping | 1,6-7,16,22 | 1-2,4,6,15-16,22-23,25 | 1-2,4,6-7,15-16,22-23,25 |
6 | Hayes Roth | 1-4 | 1-4 | 1-4 |
7 | Ionosphere | 1,3,12-14,26,30,32-34 | 1,3,11,15,17,19,21,32 | 1,3,11,15,17-22,30,32-34 |
8 | Molecular Biology | 13,38-39,48-49,56 | 13,38-40,48-49 | 13,38-40,48-49,56 |
9 | Statlog (Vehicle Silhouettes) | 4,10,14,16-18 | 1,4-6,8,10,14,16-18 | 1,4-6,8,10,14,16-18 |
10 | Vertebral Column | 2-3,5 | 2-3,5 | 2-3,5 |
11 | Wine | 2,5,7,10-13 | 1-2,4-5,7-8,10-13 | 1-2,4-5,7-8,10-13 |
12 | Yeast | 1-4,6-8 | 1-7 | 1-8 |
ID | Approx. Quality (AQRSS) | Approx. Quality (CERSS) | Approx. Quality (MCRSS) | Cond. Entropy (AQRSS) | Cond. Entropy (CERSS) | Cond. Entropy (MCRSS) | Accuracy (AQRSS) | Accuracy (CERSS) | Accuracy (MCRSS)
---|---|---|---|---|---|---|---|---|---
1 | 0.6173 | 0.6303 | 0.6260 | 0.1660 | 0.1661 | 0.1650 | 0.9332 | 0.9560 | 0.9578 |
2 | 0.0800 | 0.3200 | 0.3200 | 0.8963 | 0.5038 | 0.5038 | 0.1229 | 0.4628 | 0.4628 |
3 | 0.2540 | 0.2408 | 0.2352 | 0.5810 | 0.5718 | 0.5784 | 0.8264 | 0.8810 | 0.9045 |
4 | 0.9945 | 0.9918 | 0.9782 | 0.1450 | 0.0896 | 0.0758 | 0.5219 | 0.7760 | 0.8387 |
5 | 0.2783 | 0.3130 | 0.3130 | 0.4716 | 0.4095 | 0.4249 | 0.7342 | 0.8547 | 0.8910 |
6 | 0.4505 | 0.4505 | 0.4505 | 0.3913 | 0.3913 | 0.3913 | 0.8033 | 0.8033 | 0.8033 |
7 | 0.6441 | 0.5646 | 0.6667 | 0.4374 | 0.3442 | 0.4384 | 0.7749 | 0.8319 | 0.8319 |
8 | 0.4030 | 0.3697 | 0.4030 | 0.3919 | 0.4473 | 0.4285 | 0.6710 | 0.7182 | 0.7182 |
9 | 0.1298 | 0.1403 | 0.1403 | 0.7605 | 0.7013 | 0.7013 | 0.2211 | 0.5757 | 0.5757 |
10 | 0.4882 | 0.4882 | 0.4882 | 0.2799 | 0.2799 | 0.2799 | 0.8719 | 0.8719 | 0.8719 |
11 | 0.8142 | 0.8408 | 0.8408 | 0.1085 | 0.0966 | 0.0966 | 0.9665 | 0.9775 | 0.9775 |
12 | 0.0532 | 0.0443 | 0.0443 | 0.9216 | 0.9203 | 0.9203 | 0.5266 | 0.5213 | 0.5213 |
average | 0.4340 | 0.4495 | 0.4546 | 0.4625 | 0.4101 | 0.4170 | 0.6645 | 0.7713 | 0.7756 |