Article

Novel Three-Way Decisions Models with Multi-Granulation Rough Intuitionistic Fuzzy Sets

1 College of Computer and Information Engineering, Henan Normal University, Xinxiang 453007, China
2 Engineering Lab of Henan Province for Intelligence Business & Internet of Things, Henan Normal University, Xinxiang 453007, China
* Authors to whom correspondence should be addressed.
Symmetry 2018, 10(11), 662; https://doi.org/10.3390/sym10110662
Submission received: 27 October 2018 / Revised: 11 November 2018 / Accepted: 16 November 2018 / Published: 21 November 2018
(This article belongs to the Special Issue Discrete Mathematics and Symmetry)

Abstract: Existing methods for constructing granularity importance degree consider only the direct influence of a single granularity on decision-making; they ignore the joint impact of the other granularities when granularity selection is carried out. We make the following improvements. First, we define a more reasonable method for calculating granularity importance degree among multiple granularities to address the above problem, and we give a granularity reduction algorithm based on it. In addition, this paper combines the reduction sets of optimistic and pessimistic multi-granulation rough sets with intuitionistic fuzzy sets, respectively, and presents their related properties. On this basis, to further remove the redundant objects in each granularity of the reduction sets, four novel three-way decisions models with multi-granulation rough intuitionistic fuzzy sets are developed. Moreover, a series of concrete examples demonstrates that these joint models not only remove the redundant objects inside each granularity of the reduction sets, but also generate more suitable granularity selection results using the designed comprehensive score function and comprehensive accuracy function of granularities.

1. Introduction

Pawlak [1,2] proposed rough sets theory in 1982 as a method of dealing with inexactness and uncertainty, and it has since been developed into a variety of theories [3,4,5,6]. For example, the multi-granulation rough sets (MRS) model proposed by Qian et al. [9] is one of the important developments [7,8], and can be regarded as a mathematical framework for granular computing. Within MRS, granularity reduction is a vital research topic. Considering the test cost of granularity structure selection in data mining and machine learning, Yang et al. constructed two reduction algorithms for cost-sensitive multi-granulation decision-making systems based on the definition of approximation quality [10]. By introducing the concept of distribution reduction [11] and taking the quality of approximate distribution as the measure in the multi-granulation decision rough sets model, Sang et al. proposed an α-lower approximate distribution reduction algorithm based on multi-granulation decision rough sets; however, the interactions among multiple granularities were not considered [12]. To overcome the problem of updating reductions when large-scale data vary dynamically, Jing et al. developed an incremental attribute reduction approach based on knowledge granularity from a multi-granulation view [13]. Other multi-granulation reduction methods have since been put forward one after another [14,15,16,17].
The notion of intuitionistic fuzzy sets (IFS), proposed by Atanassov [18,19], was initially developed in the framework of fuzzy sets [20,21]. In this literature, how to obtain reasonable membership and non-membership functions is a key issue. To deal with fuzzy information better, many experts and scholars have extended the IFS model. Huang et al. combined IFS with MRS to obtain intuitionistic fuzzy MRS [22]. On the basis of fuzzy rough sets, Liu et al. constructed covering-based multi-granulation fuzzy rough sets [23]. Moreover, a multi-granulation rough intuitionistic fuzzy cut sets model was constructed by Xue et al. [24]. To reduce the classification errors and the limitations of ordering by a single theory, they further combined IFS with graded rough sets theory based on dominance relations and extended them to a multi-granulation perspective [25]. Under optimistic multi-granulation intuitionistic fuzzy rough sets, Wang et al. proposed a novel method for solving multiple criteria group decision-making problems [26]. However, the above studies rarely address the optimal granularity selection problem in intuitionistic fuzzy environments. The measure of similarity between intuitionistic fuzzy sets is also an active research area, and some similarity measures for IFS are summarized in references [27,28,29]; however, these metric formulas cannot measure the importance degree of multiple granularities in the same IFS.
To further explain the semantics of decision-theoretic rough sets (DTRS), Yao proposed the three-way decisions theory [30,31], which greatly advanced the development of rough sets. As a risk decision-making method, the key strategy of three-way decisions is to divide the domain into acceptance, rejection, and non-commitment regions. Researchers have by now accumulated a vast literature on its theory and applications. For instance, to widen the applicability of the three-way decisions model in uncertain environments, Zhai et al. extended the target concepts of three-way decisions models to tolerance rough fuzzy sets and rough fuzzy sets, respectively [32,33]. To accommodate the situation where the objects or attributes in a multi-scale decision table are sequentially updated, Hao et al. used sequential three-way decisions to investigate the optimal scale selection problem [34]. Subsequently, Luo et al. applied three-way decisions theory to incomplete multi-scale information systems [35]. With respect to multiple attribute decision-making, Zhang et al. studied the inclusion relations of neutrosophic sets in reference [36]. To improve the classification accuracy of three-way decisions, Zhang et al. proposed a novel three-way decisions model with DTRS by considering new risk measurement functions through utility theory [37]. Yang et al. combined three-way decisions theory with IFS to obtain novel three-way decision rules [38]. At the same time, Liu et al. explored intuitionistic fuzzy three-way decision theory based on intuitionistic fuzzy decision systems [39]. Nevertheless, Yang et al. [38] and Liu et al. [39] only considered the case of a single granularity, and did not analyze decision-making with multiple granularities in an intuitionistic fuzzy environment. DTRS and three-way decisions theory are both used to deal with decision-making problems, so it is also enlightening to study three-way decisions theory through DTRS. An extended version applicable to multi-period scenarios was introduced by Liang et al. using intuitionistic fuzzy decision-theoretic rough sets [40]. Furthermore, they introduced the intuitionistic fuzzy point operator into DTRS [41]. Three-way decisions have also been applied to multiple attribute group decision-making [42], the supplier selection problem [43], clustering analysis [44], cognitive computing [45], and so on. However, these works have not applied three-way decisions theory to the optimal granularity selection problem. To solve this problem, we extend the three-way decisions models.
The main contributions of this paper include four points:
(1) New methods for calculating granularity importance degree among multiple granularities (i.e., $sig^{in,\ast}_{\Delta}(A_i, A, D)$ and $sig^{out,\ast}_{\Delta}(A_i, A, D)$) are given respectively, which can generate more discriminative granularities.
(2) The optimistic optimistic multi-granulation rough intuitionistic fuzzy sets (OOMRIFS) model, the optimistic pessimistic multi-granulation rough intuitionistic fuzzy sets (OIMRIFS) model, the pessimistic optimistic multi-granulation rough intuitionistic fuzzy sets (IOMRIFS) model, and the pessimistic pessimistic multi-granulation rough intuitionistic fuzzy sets (IIMRIFS) model are constructed by combining intuitionistic fuzzy sets with the reductions of the optimistic and pessimistic multi-granulation rough sets. These four models can reduce the subjective errors caused by a single intuitionistic fuzzy set.
(3) We put forward four kinds of three-way decisions models based on the proposed four multi-granulation rough intuitionistic fuzzy sets (MRIFS), which can further reduce the redundant objects in each granularity of reduction sets.
(4) Comprehensive score function and comprehensive accuracy function based on MRIFS are constructed. Based on this, we can obtain the optimal granularity selection results.
The rest of this paper is organized as follows. In Section 2, some basic concepts of MRS, IFS, and three-way decisions are briefly reviewed. In Section 3, we propose two new methods for calculating granularity importance degree and a granularity reduction algorithm (Algorithm 1); a comparative example is also given. Four novel MRIFS models are constructed in Section 4, and their properties are verified by Example 2. Section 5 proposes novel three-way decisions models based on the above four new MRIFS, and builds the comprehensive score function and comprehensive accuracy function based on MRIFS; through Algorithm 2, we make the optimal granularity selection. In Section 6, we use Example 3 to illustrate the three-way decisions models based on the new MRIFS. Section 7 concludes this paper.

2. Preliminaries

The basic notions of MRS, IFS, and three-way decisions theory are briefly reviewed in this section. Throughout the paper, U denotes a nonempty object set, i.e., the universe of discourse, and $A = \{A_1, A_2, \ldots, A_m\}$ is an attribute set.
Definition 1
([9]). Suppose $IS = \langle U, A, V, f \rangle$ is a consistent information system, where $A = \{A_1, A_2, \ldots, A_m\}$ is an attribute set, $R_{A_i}$ is the equivalence relation generated by $A_i$, and $[x]_{A_i}$ is the equivalence class of $R_{A_i}$. For $X \subseteq U$, the lower and upper approximations of the optimistic multi-granulation rough sets (OMRS) of X are defined by the following two formulas:
$$\underline{\sum_{i=1}^{m} A_i}^{O}(X) = \{x \in U \mid [x]_{A_1} \subseteq X \lor [x]_{A_2} \subseteq X \lor \cdots \lor [x]_{A_m} \subseteq X\}; \qquad \overline{\sum_{i=1}^{m} A_i}^{O}(X) = \sim \underline{\sum_{i=1}^{m} A_i}^{O}(\sim X).$$
where $\lor$ is the disjunction operation and $\sim X$ is the complement of X. If $\underline{\sum_{i=1}^{m} A_i}^{O}(X) \neq \overline{\sum_{i=1}^{m} A_i}^{O}(X)$, the pair $(\underline{\sum_{i=1}^{m} A_i}^{O}(X), \overline{\sum_{i=1}^{m} A_i}^{O}(X))$ is referred to as an optimistic multi-granulation rough set of X.
Definition 2
([9]). Let $IS = \langle U, A, V, f \rangle$ be an information system, where $A = \{A_1, A_2, \ldots, A_m\}$ is an attribute set, $R_{A_i}$ is the equivalence relation generated by $A_i$, and $[x]_{A_i}$ ($1 \le i \le m$) is the equivalence class of x under $A_i$. For $X \subseteq U$, the pessimistic multi-granulation rough sets (IMRS) of X with respect to A are defined as follows:
$$\underline{\sum_{i=1}^{m} A_i}^{I}(X) = \{x \in U \mid [x]_{A_1} \subseteq X \land [x]_{A_2} \subseteq X \land \cdots \land [x]_{A_m} \subseteq X\}; \qquad \overline{\sum_{i=1}^{m} A_i}^{I}(X) = \sim \underline{\sum_{i=1}^{m} A_i}^{I}(\sim X).$$
where $\land$ is the conjunction operation. If $\underline{\sum_{i=1}^{m} A_i}^{I}(X) \neq \overline{\sum_{i=1}^{m} A_i}^{I}(X)$, the pair $(\underline{\sum_{i=1}^{m} A_i}^{I}(X), \overline{\sum_{i=1}^{m} A_i}^{I}(X))$ is referred to as a pessimistic multi-granulation rough set of X.
Definition 3
([18,19]). Let U be a finite non-empty universe set; then an IFS E in U is denoted by:
$$E = \{\langle x, \mu_E(x), \nu_E(x)\rangle \mid x \in U\},$$
where $\mu_E: U \to [0,1]$ and $\nu_E: U \to [0,1]$ are called the membership and non-membership functions of the element x in E, with $0 \le \mu_E(x) + \nu_E(x) \le 1$. For $x \in U$, the hesitancy degree function is defined as $\pi_E(x) = 1 - \mu_E(x) - \nu_E(x)$; obviously, $\pi_E: U \to [0,1]$. Suppose $E_1, E_2 \in IFS(U)$; the basic operations on $E_1$ and $E_2$ are given as follows:
(1) $E_1 \subseteq E_2 \Leftrightarrow \mu_{E_1}(x) \le \mu_{E_2}(x),\ \nu_{E_1}(x) \ge \nu_{E_2}(x),\ \forall x \in U$;
(2) $E_1 = E_2 \Leftrightarrow \mu_{E_1}(x) = \mu_{E_2}(x),\ \nu_{E_1}(x) = \nu_{E_2}(x),\ \forall x \in U$;
(3) $E_1 \cup E_2 = \{\langle x, \max\{\mu_{E_1}(x), \mu_{E_2}(x)\}, \min\{\nu_{E_1}(x), \nu_{E_2}(x)\}\rangle \mid x \in U\}$;
(4) $E_1 \cap E_2 = \{\langle x, \min\{\mu_{E_1}(x), \mu_{E_2}(x)\}, \max\{\nu_{E_1}(x), \nu_{E_2}(x)\}\rangle \mid x \in U\}$;
(5) $\sim E_1 = \{\langle x, \nu_{E_1}(x), \mu_{E_1}(x)\rangle \mid x \in U\}$.
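The following is a minimal Python sketch of Definition 3's operations, representing an IFS as a dictionary mapping each object to a (membership, non-membership) pair; the function names and sample data are illustrative, not from the paper.

```python
# IFS operations of Definition 3, as a sketch; each IFS maps object -> (mu, nu).

def ifs_union(e1, e2):
    # E1 ∪ E2: pointwise max of memberships, min of non-memberships.
    return {x: (max(e1[x][0], e2[x][0]), min(e1[x][1], e2[x][1])) for x in e1}

def ifs_intersection(e1, e2):
    # E1 ∩ E2: pointwise min of memberships, max of non-memberships.
    return {x: (min(e1[x][0], e2[x][0]), max(e1[x][1], e2[x][1])) for x in e1}

def ifs_complement(e1):
    # ~E1: swap membership and non-membership.
    return {x: (nu, mu) for x, (mu, nu) in e1.items()}

def hesitancy(e1, x):
    # pi_E(x) = 1 - mu_E(x) - nu_E(x).
    mu, nu = e1[x]
    return 1 - mu - nu

e1 = {"x1": (0.25, 0.43), "x2": (0.51, 0.28)}
e2 = {"x1": (0.54, 0.38), "x2": (0.37, 0.59)}
print(ifs_union(e1, e2))                 # {'x1': (0.54, 0.38), 'x2': (0.51, 0.28)}
print(round(hesitancy(e1, "x1"), 2))     # 0.32
```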
Definition 4
([30,31]). Let $U = \{x_1, x_2, \ldots, x_n\}$ be a universe of discourse, and let $\xi = \{\omega_P, \omega_N, \omega_B\}$ denote the decisions of assigning an object x to the acceptance region $POS(X)$, the rejection region $NEG(X)$, or the boundary region $BND(X)$, respectively. The cost functions $\lambda_{PP}$, $\lambda_{NP}$, and $\lambda_{BP}$ represent the costs of the three decisions when $x \in X$, and the cost functions $\lambda_{PN}$, $\lambda_{NN}$, and $\lambda_{BN}$ represent the costs of the three decisions when $x \notin X$, as shown in Table 1.
According to the minimum-risk principle of the Bayesian decision procedure, the three-way decisions rules can be obtained as follows:
(P): If $P(X \mid [x]) \ge \alpha$, then $x \in POS(X)$;
(N): If $P(X \mid [x]) \le \beta$, then $x \in NEG(X)$;
(B): If $\beta < P(X \mid [x]) < \alpha$, then $x \in BND(X)$.
Here, $\alpha$, $\beta$, and $\gamma$ are respectively given by:
$$\alpha = \frac{\lambda_{PN} - \lambda_{BN}}{(\lambda_{PN} - \lambda_{BN}) + (\lambda_{BP} - \lambda_{PP})};$$
$$\beta = \frac{\lambda_{BN} - \lambda_{NN}}{(\lambda_{BN} - \lambda_{NN}) + (\lambda_{NP} - \lambda_{BP})};$$
$$\gamma = \frac{\lambda_{PN} - \lambda_{NN}}{(\lambda_{PN} - \lambda_{NN}) + (\lambda_{NP} - \lambda_{PP})}.$$
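As a small sketch of Definition 4 under stated assumptions, the following Python code computes the thresholds from the six loss values and applies rules (P), (N), (B); the loss values in the usage example are the ones implied by the computation in Section 6 (from reference [11]), and all names are illustrative.

```python
# Thresholds and three-way rules of Definition 4 (a sketch).

def thresholds(l_pp, l_bp, l_np, l_pn, l_bn, l_nn):
    alpha = (l_pn - l_bn) / ((l_pn - l_bn) + (l_bp - l_pp))
    beta = (l_bn - l_nn) / ((l_bn - l_nn) + (l_np - l_bp))
    gamma = (l_pn - l_nn) / ((l_pn - l_nn) + (l_np - l_pp))
    return alpha, beta, gamma

def three_way(prob, alpha, beta):
    if prob >= alpha:
        return "POS"    # rule (P): accept
    if prob <= beta:
        return "NEG"    # rule (N): reject
    return "BND"        # rule (B): defer

# lambda_PP = 0, lambda_BP = 2, lambda_NP = 6, lambda_PN = 8, lambda_BN = 2, lambda_NN = 0.
alpha, beta, gamma = thresholds(0, 2, 6, 8, 2, 0)
print(alpha, round(beta, 2))           # 0.75 0.33
print(three_way(0.8, alpha, beta))     # POS
```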

3. Granularity Reduction Algorithm Derived from Granularity Importance Degree

Definition 5
([10,12]). Let $DIS = (U, C \cup D, V, f)$ be a decision information system, where $A = \{A_1, A_2, \ldots, A_m\}$ are m sub-attributes of the condition attributes C, and $U/D = \{X_1, X_2, \ldots, X_s\}$ is the partition induced by the decision attributes D. Then the approximation quality of $U/D$ with respect to the granularity set A is defined as:
$$\gamma(A, D) = \frac{\left|\bigcup\left\{\underline{\sum_{i=1}^{m} A_i}^{\Delta}(X_t) \mid 1 \le t \le s\right\}\right|}{|U|}.$$
where $|X|$ denotes the cardinality of the set X, and $\Delta \in \{O, I\}$ distinguishes the optimistic and pessimistic multi-granulation rough sets (the same below).
Definition 6
([12]). Let $DIS = (U, C \cup D, V, f)$ be a decision information system, $A = \{A_1, A_2, \ldots, A_m\}$ are m sub-attributes of C, $A' \subseteq A$, and $X \in U/D$.
(1) If $\underline{\sum_{i=1, A_i \in A}^{m} A_i}^{\Delta}(X) \neq \underline{\sum_{i=1, A_i \in A - A'}^{m} A_i}^{\Delta}(X)$, then $A'$ is important in A for X;
(2) If $\underline{\sum_{i=1, A_i \in A}^{m} A_i}^{\Delta}(X) = \underline{\sum_{i=1, A_i \in A - A'}^{m} A_i}^{\Delta}(X)$, then $A'$ is not important in A for X.
Definition 7
([10,12]). Suppose $DIS = (U, C \cup D, V, f)$ is a decision information system, $A = \{A_1, A_2, \ldots, A_m\}$ are m sub-attributes of C, and $A' \subseteq A$. For $\forall A_i \in A'$, the internal importance degree of $A_i$ with respect to D on the granularity set $A'$ can be defined as follows:
$$sig^{in}_{\Delta}(A_i, A', D) = \left|\gamma(A', D) - \gamma(A' - \{A_i\}, D)\right|.$$
Definition 8
([10,12]). Let $DIS = (U, C \cup D, V, f)$ be a decision information system, $A = \{A_1, A_2, \ldots, A_m\}$ are m sub-attributes of C, and $A' \subseteq A$. For $\forall A_i \in A - A'$, the external importance degree of $A_i$ with respect to D on the granularity set $A'$ can be defined as follows:
$$sig^{out}_{\Delta}(A_i, A', D) = \left|\gamma(A_i \cup A', D) - \gamma(A', D)\right|.$$
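A minimal Python sketch of Definitions 5, 7, and 8 follows, under stated assumptions: each granularity is a partition of U given as a list of blocks, u_d is the decision partition U/D, and delta is "O" (optimistic) or "I" (pessimistic); all function names are illustrative, not from the paper.

```python
# Approximation quality and importance degrees (Definitions 5, 7, 8), as a sketch.

def lower_approx(partitions, x_t, delta):
    # Multi-granulation lower approximation of a decision class X_t.
    universe = {x for block in partitions[0] for x in block}
    target = set(x_t)
    result = set()
    for x in universe:
        inside = [any(x in b and set(b) <= target for b in p) for p in partitions]
        if any(inside) if delta == "O" else all(inside):
            result.add(x)
    return result

def gamma_quality(partitions, u_d, delta):
    # Definition 5: |union of lower approximations of all X_t| / |U|.
    universe = {x for block in partitions[0] for x in block}
    covered = set()
    for x_t in u_d:
        covered |= lower_approx(partitions, x_t, delta)
    return len(covered) / len(universe)

def sig_in(i, partitions, u_d, delta):
    # Definition 7: internal importance of the i-th granularity in the list.
    rest = partitions[:i] + partitions[i + 1:]
    return abs(gamma_quality(partitions, u_d, delta)
               - gamma_quality(rest, u_d, delta))

def sig_out(extra, partitions, u_d, delta):
    # Definition 8: external importance of a granularity outside the set.
    return abs(gamma_quality(partitions + [extra], u_d, delta)
               - gamma_quality(partitions, u_d, delta))
```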
Theorem 1.
Let $DIS = (U, C \cup D, V, f)$ be a decision information system, $A = \{A_1, A_2, \ldots, A_m\}$ are m sub-attributes of C, and $A' \subseteq A$.
(1) For $\forall A_i \in A'$, on the basis of the attribute subset family $A'$, the granularity importance degree of $A_i$ in $A'$ with respect to D can be expressed as follows:
$$sig^{in}_{\Delta}(A_i, A', D) = \frac{1}{m-1}\sum_{k=1, k \neq i}^{m}\left|sig^{in}_{\Delta}(\{A_k, A_i\}, A', D) - sig^{in}_{\Delta}(A_k, A' - \{A_i\}, D)\right|.$$
where $1 \le k \le m$, $k \neq i$ (the same below).
(2) For $\forall A_i \in A - A'$, on the basis of the attribute subset family $A'$, the granularity importance degree of $A_i$ in $A - A'$ with respect to D satisfies:
$$sig^{out}_{\Delta}(A_i, A', D) = \frac{1}{m-1}\sum_{k=1, k \neq i}^{m}\left|sig^{out}_{\Delta}(\{A_k, A_i\}, \{A_i\} \cup A', D) - sig^{out}_{\Delta}(A_k, A', D)\right|.$$
Proof.
(1) According to Definition 7, adding and subtracting $\gamma(A' - \{A_k, A_i\}, D)$ inside the absolute value gives:
$$\begin{aligned}
sig^{in}_{\Delta}(A_i, A', D) &= \left|\gamma(A', D) - \gamma(A' - \{A_i\}, D)\right| \\
&= \frac{1}{m-1}\sum_{k=1, k \neq i}^{m}\left|\gamma(A', D) - \gamma(A' - \{A_k, A_i\}, D) - \left(\gamma(A' - \{A_i\}, D) - \gamma(A' - \{A_k, A_i\}, D)\right)\right| \\
&= \frac{1}{m-1}\sum_{k=1, k \neq i}^{m}\left|sig^{in}_{\Delta}(\{A_k, A_i\}, A', D) - sig^{in}_{\Delta}(A_k, A' - \{A_i\}, D)\right|.
\end{aligned}$$
(2) According to Definition 8, we can get:
$$\begin{aligned}
sig^{out}_{\Delta}(A_i, A', D) &= \left|\gamma(\{A_i\} \cup A', D) - \gamma(A', D)\right| \\
&= \frac{1}{m-1}\sum_{k=1, k \neq i}^{m}\Big|\left|\gamma(\{A_i\} \cup A', D) - \gamma(\{A_k\} \cup A', D)\right| - \left|\gamma(\{A_k\} \cup A', D) - \gamma(A', D)\right|\Big| \\
&= \frac{1}{m-1}\sum_{k=1, k \neq i}^{m}\left|sig^{out}_{\Delta}(\{A_k, A_i\}, \{A_i\} \cup A', D) - sig^{out}_{\Delta}(A_k, A', D)\right|.
\end{aligned}$$
 □
In Definitions 7 and 8, only the direct effect of a single granularity on the whole granularity set is considered, without the indirect effect of the remaining granularities on decision-making. Definitions 9 and 10 below synthetically analyze the interdependence among multiple granularities and present two new methods for calculating granularity importance degree.
Definition 9.
Let $DIS = (U, C \cup D, V, f)$ be a decision information system, $A = \{A_1, A_2, \ldots, A_m\}$ are m sub-attributes of C, and $A' \subseteq A$. For $\forall A_i, A_k \in A'$, on the attribute subset family $A'$, the new internal importance degree of $A_i$ relative to D is defined as follows:
$$sig^{in,\ast}_{\Delta}(A_i, A', D) = sig^{in}_{\Delta}(A_i, A', D) + \frac{1}{m-1}\sum_{k=1, k \neq i}^{m}\left|sig^{in}_{\Delta}(A_k, A' - \{A_i\}, D) - sig^{in}_{\Delta}(A_k, A', D)\right|.$$
Here $sig^{in}_{\Delta}(A_i, A', D)$ and $\frac{1}{m-1}\sum_{k \neq i}\left|sig^{in}_{\Delta}(A_k, A' - \{A_i\}, D) - sig^{in}_{\Delta}(A_k, A', D)\right|$ indicate the direct and indirect effects of granularity $A_i$ on decision-making, respectively. When $\left|sig^{in}_{\Delta}(A_k, A' - \{A_i\}, D) - sig^{in}_{\Delta}(A_k, A', D)\right| > 0$, the granularity importance degree of $A_k$ is changed by the addition of $A_i$ to the attribute subset $A' - \{A_i\}$, so this change should be credited to $A_i$. Therefore, when there are m sub-attributes, we add $\frac{1}{m-1}\sum_{k \neq i}\left|sig^{in}_{\Delta}(A_k, A' - \{A_i\}, D) - sig^{in}_{\Delta}(A_k, A', D)\right|$ to the granularity importance degree of $A_i$.
If $\left|sig^{in}_{\Delta}(A_k, A' - \{A_i\}, D) - sig^{in}_{\Delta}(A_k, A', D)\right| = 0$ for all $k \neq i$, then there is no interaction between granularity $A_i$ and the other granularities, which means $sig^{in,\ast}_{\Delta}(A_i, A', D) = sig^{in}_{\Delta}(A_i, A', D)$.
Definition 10.
Let $DIS = (U, C \cup D, V, f)$ be a decision information system, $A = \{A_1, A_2, \ldots, A_m\}$ be m sub-attributes of C, and $A' \subseteq A$. For $\forall A_i \in A - A'$, the new external importance degree of $A_i$ relative to D is defined as follows:
$$sig^{out,\ast}_{\Delta}(A_i, A', D) = sig^{out}_{\Delta}(A_i, A', D) + \frac{1}{m-1}\sum_{k=1, k \neq i}^{m}\left|sig^{out}_{\Delta}(A_k, A', D) - sig^{out}_{\Delta}(A_k, \{A_i\} \cup A', D)\right|.$$
The new external importance degree admits an analogous interpretation of direct and indirect effects.
Theorem 2.
Let $DIS = (U, C \cup D, V, f)$ be a decision information system, $A = \{A_1, A_2, \ldots, A_m\}$ be m sub-attributes of C, $A' \subseteq A$, and $A_i \in A'$. The improved internal importance degree can be rewritten as:
$$sig^{in,\ast}_{\Delta}(A_i, A', D) = \frac{1}{m-1}\sum_{k=1, k \neq i}^{m} sig^{in}_{\Delta}(A_i, A' - \{A_k\}, D).$$
Proof. 
$$\begin{aligned}
sig^{in,\ast}_{\Delta}(A_i, A', D) &= sig^{in}_{\Delta}(A_i, A', D) + \frac{1}{m-1}\sum_{k \neq i}\left|sig^{in}_{\Delta}(A_k, A' - \{A_i\}, D) - sig^{in}_{\Delta}(A_k, A', D)\right| \\
&= \frac{m-1}{m-1}\left|\gamma(A', D) - \gamma(A' - \{A_i\}, D)\right| \\
&\quad + \frac{1}{m-1}\sum_{k \neq i}\Big|\left|\gamma(A' - \{A_i\}, D) - \gamma(A' - \{A_k, A_i\}, D)\right| - \left|\gamma(A', D) - \gamma(A' - \{A_k\}, D)\right|\Big| \\
&= \frac{1}{m-1}\sum_{k \neq i}\left|\gamma(A' - \{A_k\}, D) - \gamma(A' - \{A_k, A_i\}, D)\right| \\
&= \frac{1}{m-1}\sum_{k \neq i} sig^{in}_{\Delta}(A_i, A' - \{A_k\}, D).
\end{aligned}$$
 □
Theorem 3.
Let $DIS = (U, C \cup D, V, f)$ be a decision information system, $A = \{A_1, A_2, \ldots, A_m\}$ are m sub-attributes of C, and $A' \subseteq A$. The improved external importance degree can be expressed as follows:
$$sig^{out,\ast}_{\Delta}(A_i, A', D) = \frac{1}{m-1}\sum_{k=1, k \neq i}^{m} sig^{out}_{\Delta}(A_i, \{A_k\} \cup A', D).$$
Proof. 
$$\begin{aligned}
sig^{out,\ast}_{\Delta}(A_i, A', D) &= sig^{out}_{\Delta}(A_i, A', D) + \frac{1}{m-1}\sum_{k \neq i}\left|sig^{out}_{\Delta}(A_k, A', D) - sig^{out}_{\Delta}(A_k, \{A_i\} \cup A', D)\right| \\
&= \frac{m-1}{m-1}\left|\gamma(\{A_i\} \cup A', D) - \gamma(A', D)\right| \\
&\quad + \frac{1}{m-1}\sum_{k \neq i}\Big|\left|\gamma(A', D) - \gamma(\{A_k\} \cup A', D)\right| - \left|\gamma(\{A_i\} \cup A', D) - \gamma(\{A_k, A_i\} \cup A', D)\right|\Big| \\
&= \frac{1}{m-1}\sum_{k \neq i}\left|\gamma(\{A_i, A_k\} \cup A', D) - \gamma(\{A_k\} \cup A', D)\right| \\
&= \frac{1}{m-1}\sum_{k \neq i} sig^{out}_{\Delta}(A_i, \{A_k\} \cup A', D).
\end{aligned}$$
 □
Theorems 2 and 3 show that when $sig^{in}_{\Delta}(A_i, A' - \{A_k\}, D) = 0$ (respectively, $sig^{out}_{\Delta}(A_i, \{A_k\} \cup A', D) = 0$) holds for all $k \neq i$, we have $sig^{in,\ast}_{\Delta}(A_i, A', D) = 0$ (respectively, $sig^{out,\ast}_{\Delta}(A_i, A', D) = 0$). Moreover, each granularity importance degree is calculated on the basis of removing some $A_k$ from $A'$, which makes it more convenient to choose the required granularities.
According to [10,12], we can obtain the optimistic and pessimistic multi-granulation lower approximations $L^{O}$ and $L^{I}$. The granularity reduction algorithm based on the improved granularity importance degree is derived from Theorems 2 and 3, as shown in Algorithm 1.
Algorithm 1. Granularity reduction algorithm derived from granularity importance degree
Input: $DIS = (U, C \cup D, V, f)$; $A = \{A_1, A_2, \ldots, A_m\}$, m sub-attributes of C; $A' \subseteq A$, $A_i \in A'$; $U/D = \{X_1, X_2, \ldots, X_s\}$;
Output: A granularity reduction set $A_i^{\Delta}$ of this information system.
1: initialize $A_i^{\Delta} \leftarrow \emptyset$, $1 \le h \le m$;
2: compute $U/D$ and the optimistic and pessimistic multi-granulation lower approximations $L^{\Delta}$;
3: for each $A_i \in A$
4:  compute $sig^{in,\ast}_{\Delta}(A_i, A, D)$ via Definition 9;
5:  if $sig^{in,\ast}_{\Delta}(A_i, A, D) > 0$ then $A_i^{\Delta} = A_i^{\Delta} \cup \{A_i\}$;
6:  end
7:  for each $A_i \in A - A_i^{\Delta}$
8:    if $\gamma(A_i^{\Delta}, D) \neq \gamma(A, D)$ then compute $sig^{out,\ast}_{\Delta}(A_i, A, D)$ via Definition 10;
9:    end
10:   if $sig^{out,\ast}_{\Delta}(A_h, A, D) = \max\{sig^{out,\ast}_{\Delta}(A_i, A, D)\}$ then $A_i^{\Delta} = A_i^{\Delta} \cup \{A_h\}$;
11:   end
12: end
13: for each $A_i \in A_i^{\Delta}$
14:   if $\gamma(A_i^{\Delta} - \{A_i\}, D) = \gamma(A, D)$ then $A_i^{\Delta} = A_i^{\Delta} - \{A_i\}$;
15:   end
16: end
17: return the granularity reduction set $A_i^{\Delta}$;
18: end
Therefore, we can obtain two reductions by utilizing Algorithm 1.
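The following is a condensed Python sketch of Algorithm 1 under stated assumptions: grans maps a granularity name to its partition, gamma_quality is the helper from the sketch after Definition 8, and the improved internal degree uses the simplification of Theorem 2; the greedy external-importance step (lines 7-12) is only indicated in a comment, and all names are illustrative.

```python
# A sketch of Algorithm 1, assuming gamma_quality from the earlier sketch.

def sig_in_star(name, grans, u_d, delta):
    # Theorem 2: sig_in*(A_i, A, D) = (1/(m-1)) * sum_k sig_in(A_i, A - {A_k}, D).
    total = 0.0
    for k in grans:
        if k == name:
            continue
        sub = {n: p for n, p in grans.items() if n != k}    # A - {A_k}
        rest = {n: p for n, p in sub.items() if n != name}  # A - {A_k, A_i}
        total += abs(gamma_quality(list(sub.values()), u_d, delta)
                     - gamma_quality(list(rest.values()), u_d, delta))
    return total / (len(grans) - 1)

def reduce_granularities(grans, u_d, delta):
    # Lines 3-6: keep granularities with positive improved internal importance.
    reduct = {n: p for n, p in grans.items()
              if sig_in_star(n, grans, u_d, delta) > 0}
    full = gamma_quality(list(grans.values()), u_d, delta)
    # Lines 7-12 (omitted here): while gamma(reduct) != gamma(A), add the outside
    # granularity with the largest improved external importance degree.
    # Lines 13-16: drop members whose removal preserves approximation quality.
    for n in list(reduct):
        rest = {k: p for k, p in reduct.items() if k != n}
        if rest and gamma_quality(list(rest.values()), u_d, delta) == full:
            reduct = rest
    return set(reduct)
```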
Example 1.
This example calculates the granularity importance degrees for the 10 online investment schemes given in reference [12]. After comparing and analyzing the obtained granularity importance degrees, we obtain the reduction results of the 5 evaluation sites through Algorithm 1; the detailed calculation steps are as follows.
According to [12], we have $A = \{A_1, A_2, A_3, A_4, A_5\}$, $A' \subseteq A$, and $U/D = \{\{x_1, x_2, x_4, x_6, x_8\}, \{x_3, x_5, x_7, x_9, x_{10}\}\}$.
(1)
Reduction set of OMRS
First of all, we can calculate the internal importance degree of OMRS by Theorem 2 as shown in Table 2.
Then, according to Algorithm 1, the initial granularity set is deduced to be $\{A_1, A_2, A_3\}$. Inspired by Definition 5, we obtain $\gamma^{O}(\{A_2, A_3\}, D) = \gamma^{O}(A, D) = 1$. So the reduction set of the OMRS is $A_i^{O} = \{A_2, A_3\}$.
As shown in Table 2, the new method for calculating internal importance degree generates more discriminative granularities, which makes it more convenient to screen out the required granularities. In reference [12], the approximation quality of granularity A2 in the reduction set differs from that of the whole granularity set, so the external importance degree had to be calculated again there. When calculating the internal and external importance degrees, references [10,12] only considered the direct influence of a single granularity, so the influence of granularity A2 on the overall decision-making could not be fully reflected.
(2)
Reduction set of IMRS
Similarly, by using Theorem 2, we can get the internal importance degree of each site under IMRS, as shown in Table 3.
According to Algorithm 1, sites 2, 4, and 5, whose internal importance degrees are greater than 0, are added to the granularity reduction set as the initial granularity set, and its approximation quality can then be calculated as follows:
$$\gamma^{I}(\{A_2, A_4\}, D) = \gamma^{I}(\{A_4, A_5\}, D) = \gamma^{I}(A, D) = 0.2.$$
Namely, the reduction set of IMRS is $A_i^{I} = \{A_2, A_4\}$ or $A_i^{I} = \{A_4, A_5\}$, without calculating the external importance degree.
In this paper, when calculating the internal and external importance degree of each granularity, the influence of removing the other granularities on decision-making is also considered. By Theorem 2, after calculating the internal importance degrees of OMRS and IMRS, if the approximation quality of the reduction set equals that of the whole granularity set, it is unnecessary to calculate the external importance degree again, which reduces the amount of computation.
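As a quick numeric check (a sketch, not part of the original example), the approximation-quality claims above can be reproduced with the gamma_quality helper from the sketch after Definition 8, using U/D from this example and the partitions of sites 2-5 listed in Section 6:

```python
# Verifying Example 1's approximation qualities, assuming gamma_quality above.
u_d = [["x1", "x2", "x4", "x6", "x8"], ["x3", "x5", "x7", "x9", "x10"]]
a2 = [["x1", "x2", "x4"], ["x3", "x5", "x7"], ["x6", "x8", "x9"], ["x10"]]
a3 = [["x1", "x4", "x6"], ["x2", "x3", "x5"], ["x8"], ["x7", "x9", "x10"]]
a4 = [["x1", "x2", "x3", "x5"], ["x4"], ["x6", "x7", "x8"], ["x9", "x10"]]
a5 = [["x1", "x3", "x4", "x6"], ["x2", "x7"], ["x5", "x8"], ["x9", "x10"]]

print(gamma_quality([a2, a3], u_d, "O"))   # 1.0: {A2, A3} preserves quality
print(gamma_quality([a2, a4], u_d, "I"))   # 0.2
print(gamma_quality([a4, a5], u_d, "I"))   # 0.2, matching the text
```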

4. Novel Multi-Granulation Rough Intuitionistic Fuzzy Sets Models

In Example 1, two reduction sets are obtained under IMRS, so a novel method is needed to obtain a more accurate granularity selection result.
To obtain the optimal site selection result, we combine the optimistic and pessimistic multi-granulation reduction sets produced by Algorithm 1 with IFS, respectively, and construct the following four new MRIFS models.
Definition 11
([22,25]). Suppose $IS = (U, A, V, f)$ is an information system, $A = \{A_1, A_2, \ldots, A_m\}$, and E is an IFS on U. Then the lower and upper approximations of the optimistic MRIFS of E are respectively defined by:
$$\underline{\sum_{i=1}^{m} R_{A_i}}^{O}(E) = \left\{\left\langle x, \mu_{\underline{\sum_{i=1}^{m} R_{A_i}}^{O}(E)}(x), \nu_{\underline{\sum_{i=1}^{m} R_{A_i}}^{O}(E)}(x)\right\rangle \mid x \in U\right\}; \qquad \overline{\sum_{i=1}^{m} R_{A_i}}^{O}(E) = \left\{\left\langle x, \mu_{\overline{\sum_{i=1}^{m} R_{A_i}}^{O}(E)}(x), \nu_{\overline{\sum_{i=1}^{m} R_{A_i}}^{O}(E)}(x)\right\rangle \mid x \in U\right\}.$$
where
$$\mu_{\underline{\sum_{i=1}^{m} R_{A_i}}^{O}(E)}(x) = \bigvee_{i=1}^{m}\,\inf_{y \in [x]_{A_i}} \mu_E(y), \qquad \nu_{\underline{\sum_{i=1}^{m} R_{A_i}}^{O}(E)}(x) = \bigwedge_{i=1}^{m}\,\sup_{y \in [x]_{A_i}} \nu_E(y);$$
$$\mu_{\overline{\sum_{i=1}^{m} R_{A_i}}^{O}(E)}(x) = \bigwedge_{i=1}^{m}\,\sup_{y \in [x]_{A_i}} \mu_E(y), \qquad \nu_{\overline{\sum_{i=1}^{m} R_{A_i}}^{O}(E)}(x) = \bigvee_{i=1}^{m}\,\inf_{y \in [x]_{A_i}} \nu_E(y).$$
Here $R_{A_i}$ is the equivalence relation generated by $A_i$, $[x]_{A_i}$ is the equivalence class of $R_{A_i}$, and $\bigvee$ ($\bigwedge$) denotes the disjunctive (conjunctive) combination, i.e., the maximum (minimum) over the granularities.
Definition 12
([22,25]). Suppose $IS = \langle U, A, V, f \rangle$ is an information system, $A = \{A_1, A_2, \ldots, A_m\}$, and E is an IFS on U. Then the lower and upper approximations of the pessimistic MRIFS of E can be described as follows:
$$\underline{\sum_{i=1}^{m} R_{A_i}}^{I}(E) = \left\{\left\langle x, \mu_{\underline{\sum_{i=1}^{m} R_{A_i}}^{I}(E)}(x), \nu_{\underline{\sum_{i=1}^{m} R_{A_i}}^{I}(E)}(x)\right\rangle \mid x \in U\right\}; \qquad \overline{\sum_{i=1}^{m} R_{A_i}}^{I}(E) = \left\{\left\langle x, \mu_{\overline{\sum_{i=1}^{m} R_{A_i}}^{I}(E)}(x), \nu_{\overline{\sum_{i=1}^{m} R_{A_i}}^{I}(E)}(x)\right\rangle \mid x \in U\right\}.$$
where
$$\mu_{\underline{\sum_{i=1}^{m} R_{A_i}}^{I}(E)}(x) = \bigwedge_{i=1}^{m}\,\inf_{y \in [x]_{A_i}} \mu_E(y), \qquad \nu_{\underline{\sum_{i=1}^{m} R_{A_i}}^{I}(E)}(x) = \bigvee_{i=1}^{m}\,\sup_{y \in [x]_{A_i}} \nu_E(y);$$
$$\mu_{\overline{\sum_{i=1}^{m} R_{A_i}}^{I}(E)}(x) = \bigvee_{i=1}^{m}\,\sup_{y \in [x]_{A_i}} \mu_E(y), \qquad \nu_{\overline{\sum_{i=1}^{m} R_{A_i}}^{I}(E)}(x) = \bigwedge_{i=1}^{m}\,\inf_{y \in [x]_{A_i}} \nu_E(y).$$
where $[x]_{A_i}$ is the equivalence class of x under the equivalence relation $R_{A_i}$.
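A minimal Python sketch of Definitions 11 and 12 follows, under stated assumptions: each granularity is a partition of U, the IFS E maps each object to a (mu, nu) pair, and delta chooses the optimistic ("O") or pessimistic ("I") combination rule; the usage data reuse the IFS of Example 2 below and the partitions of sites 2 and 3 listed in Section 6, and all names are illustrative.

```python
# Lower/upper MRIFS approximations (Definitions 11 and 12), as a sketch.

def mrifs_approx(partitions, e, delta):
    def block_of(x, p):
        return next(b for b in p if x in b)
    lower, upper = {}, {}
    for x in e:
        inf_mu = [min(e[y][0] for y in block_of(x, p)) for p in partitions]
        sup_nu = [max(e[y][1] for y in block_of(x, p)) for p in partitions]
        sup_mu = [max(e[y][0] for y in block_of(x, p)) for p in partitions]
        inf_nu = [min(e[y][1] for y in block_of(x, p)) for p in partitions]
        if delta == "O":   # optimistic: disjunctive lower, conjunctive upper
            lower[x] = (max(inf_mu), min(sup_nu))
            upper[x] = (min(sup_mu), max(inf_nu))
        else:              # pessimistic: conjunctive lower, disjunctive upper
            lower[x] = (min(inf_mu), max(sup_nu))
            upper[x] = (max(sup_mu), min(inf_nu))
    return lower, upper

e = {"x1": (0.25, 0.43), "x2": (0.51, 0.28), "x3": (0.54, 0.38),
     "x4": (0.37, 0.59), "x5": (0.49, 0.35), "x6": (0.92, 0.04),
     "x7": (0.09, 0.86), "x8": (0.15, 0.46), "x9": (0.72, 0.12),
     "x10": (0.67, 0.23)}
a2 = [["x1", "x2", "x4"], ["x3", "x5", "x7"], ["x6", "x8", "x9"], ["x10"]]
a3 = [["x1", "x4", "x6"], ["x2", "x3", "x5"], ["x8"], ["x7", "x9", "x10"]]
lower, upper = mrifs_approx([a2, a3], e, "O")
print(lower["x1"])   # (0.25, 0.59), matching Example 2's OOMRIFS result
```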
Definition 13.
Suppose $IS = \langle U, A, V, f \rangle$ is an information system, $A_i^{O} = \{A_1, A_2, \ldots, A_r\} \subseteq A$, $A = \{A_1, A_2, \ldots, A_m\}$, and $R_{A_i^{O}}$ is the equivalence relation of x with respect to the attribute reduction set $A_i^{O}$ under OMRS, with $[x]_{A_i^{O}}$ the equivalence class of $R_{A_i^{O}}$. Let E be an IFS on U. Then E can be characterized by a pair of lower and upper approximations:
$$\underline{\sum_{i=1}^{r} R_{A_i^{O}}}^{O}(E) = \left\{\left\langle x, \mu_{\underline{\sum_{i=1}^{r} R_{A_i^{O}}}^{O}(E)}(x), \nu_{\underline{\sum_{i=1}^{r} R_{A_i^{O}}}^{O}(E)}(x)\right\rangle \mid x \in U\right\}; \qquad \overline{\sum_{i=1}^{r} R_{A_i^{O}}}^{O}(E) = \left\{\left\langle x, \mu_{\overline{\sum_{i=1}^{r} R_{A_i^{O}}}^{O}(E)}(x), \nu_{\overline{\sum_{i=1}^{r} R_{A_i^{O}}}^{O}(E)}(x)\right\rangle \mid x \in U\right\}.$$
where
$$\mu_{\underline{\sum_{i=1}^{r} R_{A_i^{O}}}^{O}(E)}(x) = \bigvee_{i=1}^{r}\,\inf_{y \in [x]_{A_i^{O}}} \mu_E(y), \qquad \nu_{\underline{\sum_{i=1}^{r} R_{A_i^{O}}}^{O}(E)}(x) = \bigwedge_{i=1}^{r}\,\sup_{y \in [x]_{A_i^{O}}} \nu_E(y);$$
$$\mu_{\overline{\sum_{i=1}^{r} R_{A_i^{O}}}^{O}(E)}(x) = \bigwedge_{i=1}^{r}\,\sup_{y \in [x]_{A_i^{O}}} \mu_E(y), \qquad \nu_{\overline{\sum_{i=1}^{r} R_{A_i^{O}}}^{O}(E)}(x) = \bigvee_{i=1}^{r}\,\inf_{y \in [x]_{A_i^{O}}} \nu_E(y).$$
If $\underline{\sum_{i=1}^{r} R_{A_i^{O}}}^{O}(E) \neq \overline{\sum_{i=1}^{r} R_{A_i^{O}}}^{O}(E)$, then E is called an OOMRIFS.
Definition 14.
Suppose $IS = \langle U, A, V, f \rangle$ is an information system and E is an IFS on U. Let $A_i^{O} = \{A_1, A_2, \ldots, A_r\} \subseteq A$, $A = \{A_1, A_2, \ldots, A_m\}$, where $A_i^{O}$ is an optimistic multi-granulation attribute reduction set. Then the lower and upper approximations of the pessimistic MRIFS under the optimistic multi-granulation environment can be defined as follows:
$$\underline{\sum_{i=1}^{r} R_{A_i^{O}}}^{I}(E) = \left\{\left\langle x, \mu_{\underline{\sum_{i=1}^{r} R_{A_i^{O}}}^{I}(E)}(x), \nu_{\underline{\sum_{i=1}^{r} R_{A_i^{O}}}^{I}(E)}(x)\right\rangle \mid x \in U\right\}; \qquad \overline{\sum_{i=1}^{r} R_{A_i^{O}}}^{I}(E) = \left\{\left\langle x, \mu_{\overline{\sum_{i=1}^{r} R_{A_i^{O}}}^{I}(E)}(x), \nu_{\overline{\sum_{i=1}^{r} R_{A_i^{O}}}^{I}(E)}(x)\right\rangle \mid x \in U\right\}.$$
where
$$\mu_{\underline{\sum_{i=1}^{r} R_{A_i^{O}}}^{I}(E)}(x) = \bigwedge_{i=1}^{r}\,\inf_{y \in [x]_{A_i^{O}}} \mu_E(y), \qquad \nu_{\underline{\sum_{i=1}^{r} R_{A_i^{O}}}^{I}(E)}(x) = \bigvee_{i=1}^{r}\,\sup_{y \in [x]_{A_i^{O}}} \nu_E(y);$$
$$\mu_{\overline{\sum_{i=1}^{r} R_{A_i^{O}}}^{I}(E)}(x) = \bigvee_{i=1}^{r}\,\sup_{y \in [x]_{A_i^{O}}} \mu_E(y), \qquad \nu_{\overline{\sum_{i=1}^{r} R_{A_i^{O}}}^{I}(E)}(x) = \bigwedge_{i=1}^{r}\,\inf_{y \in [x]_{A_i^{O}}} \nu_E(y).$$
The pair $(\underline{\sum_{i=1}^{r} R_{A_i^{O}}}^{I}(E), \overline{\sum_{i=1}^{r} R_{A_i^{O}}}^{I}(E))$ is called an OIMRIFS if $\underline{\sum_{i=1}^{r} R_{A_i^{O}}}^{I}(E) \neq \overline{\sum_{i=1}^{r} R_{A_i^{O}}}^{I}(E)$.
According to Definitions 13 and 14, the following theorem can be obtained.
Theorem 4.
Let $IS = \langle U, A, V, f \rangle$ be an information system, $A_i^{O} = \{A_1, A_2, \ldots, A_r\} \subseteq A$, $A = \{A_1, A_2, \ldots, A_m\}$, and let $E_1$, $E_2$ be IFS on U. Comparing Definitions 13 and 14, the following propositions are obtained:
(1) $\underline{\sum_{i=1}^{r} R_{A_i^{O}}}^{O}(\sim E_1) = \sim \overline{\sum_{i=1}^{r} R_{A_i^{O}}}^{O}(E_1)$;
(2) $\overline{\sum_{i=1}^{r} R_{A_i^{O}}}^{O}(\sim E_1) = \sim \underline{\sum_{i=1}^{r} R_{A_i^{O}}}^{O}(E_1)$;
(3) $\underline{\sum_{i=1}^{r} R_{A_i^{O}}}^{I}(\sim E_1) = \sim \overline{\sum_{i=1}^{r} R_{A_i^{O}}}^{I}(E_1)$;
(4) $\overline{\sum_{i=1}^{r} R_{A_i^{O}}}^{I}(\sim E_1) = \sim \underline{\sum_{i=1}^{r} R_{A_i^{O}}}^{I}(E_1)$;
(5) $\underline{\sum_{i=1}^{r} R_{A_i^{O}}}^{I}(E_1) \subseteq \underline{\sum_{i=1}^{r} R_{A_i^{O}}}^{O}(E_1)$;
(6) $\overline{\sum_{i=1}^{r} R_{A_i^{O}}}^{O}(E_1) \subseteq \overline{\sum_{i=1}^{r} R_{A_i^{O}}}^{I}(E_1)$;
(7) $\underline{\sum_{i=1}^{r} R_{A_i^{O}}}^{\Delta}(E_1 \cap E_2) = \underline{\sum_{i=1}^{r} R_{A_i^{O}}}^{\Delta}(E_1) \cap \underline{\sum_{i=1}^{r} R_{A_i^{O}}}^{\Delta}(E_2)$, for $\Delta \in \{O, I\}$;
(8) $\overline{\sum_{i=1}^{r} R_{A_i^{O}}}^{\Delta}(E_1 \cup E_2) = \overline{\sum_{i=1}^{r} R_{A_i^{O}}}^{\Delta}(E_1) \cup \overline{\sum_{i=1}^{r} R_{A_i^{O}}}^{\Delta}(E_2)$, for $\Delta \in \{O, I\}$;
(9) $\underline{\sum_{i=1}^{r} R_{A_i^{O}}}^{\Delta}(E_1 \cup E_2) \supseteq \underline{\sum_{i=1}^{r} R_{A_i^{O}}}^{\Delta}(E_1) \cup \underline{\sum_{i=1}^{r} R_{A_i^{O}}}^{\Delta}(E_2)$, for $\Delta \in \{O, I\}$;
(10) $\overline{\sum_{i=1}^{r} R_{A_i^{O}}}^{\Delta}(E_1 \cap E_2) \subseteq \overline{\sum_{i=1}^{r} R_{A_i^{O}}}^{\Delta}(E_1) \cap \overline{\sum_{i=1}^{r} R_{A_i^{O}}}^{\Delta}(E_2)$, for $\Delta \in \{O, I\}$.
Proof. 
It follows directly from Definitions 13 and 14. □
Definition 15.
Let $IS = \langle U, A, V, f \rangle$ be an information system and E an IFS on U. Let $A_i^{I} = \{A_1, A_2, \ldots, A_r\} \subseteq A$, $A = \{A_1, A_2, \ldots, A_m\}$, where $A_i^{I}$ is a pessimistic multi-granulation attribute reduction set. Then the pessimistic optimistic lower and upper approximations of E with respect to the equivalence relation $R_{A_i^{I}}$ are defined by the following formulas:
$$\underline{\sum_{i=1}^{r} R_{A_i^{I}}}^{O}(E) = \left\{\left\langle x, \mu_{\underline{\sum_{i=1}^{r} R_{A_i^{I}}}^{O}(E)}(x), \nu_{\underline{\sum_{i=1}^{r} R_{A_i^{I}}}^{O}(E)}(x)\right\rangle \mid x \in U\right\}; \qquad \overline{\sum_{i=1}^{r} R_{A_i^{I}}}^{O}(E) = \left\{\left\langle x, \mu_{\overline{\sum_{i=1}^{r} R_{A_i^{I}}}^{O}(E)}(x), \nu_{\overline{\sum_{i=1}^{r} R_{A_i^{I}}}^{O}(E)}(x)\right\rangle \mid x \in U\right\}.$$
where
$$\mu_{\underline{\sum_{i=1}^{r} R_{A_i^{I}}}^{O}(E)}(x) = \bigvee_{i=1}^{r}\,\inf_{y \in [x]_{A_i^{I}}} \mu_E(y), \qquad \nu_{\underline{\sum_{i=1}^{r} R_{A_i^{I}}}^{O}(E)}(x) = \bigwedge_{i=1}^{r}\,\sup_{y \in [x]_{A_i^{I}}} \nu_E(y);$$
$$\mu_{\overline{\sum_{i=1}^{r} R_{A_i^{I}}}^{O}(E)}(x) = \bigwedge_{i=1}^{r}\,\sup_{y \in [x]_{A_i^{I}}} \mu_E(y), \qquad \nu_{\overline{\sum_{i=1}^{r} R_{A_i^{I}}}^{O}(E)}(x) = \bigvee_{i=1}^{r}\,\inf_{y \in [x]_{A_i^{I}}} \nu_E(y).$$
If $\underline{\sum_{i=1}^{r} R_{A_i^{I}}}^{O}(E) \neq \overline{\sum_{i=1}^{r} R_{A_i^{I}}}^{O}(E)$, then E is called an IOMRIFS.
Definition 16.
Let $IS = \langle U, A, V, f \rangle$ be an information system and E an IFS on U. Let $A_i^{I} = \{A_1, A_2, \ldots, A_r\} \subseteq A$, $A = \{A_1, A_2, \ldots, A_m\}$, where $A_i^{I}$ is a pessimistic multi-granulation attribute reduction set. Then the pessimistic lower and upper approximations of E under IMRS are defined by the following formulas:
$$\underline{\sum_{i=1}^{r} R_{A_i^{I}}}^{I}(E) = \left\{\left\langle x, \mu_{\underline{\sum_{i=1}^{r} R_{A_i^{I}}}^{I}(E)}(x), \nu_{\underline{\sum_{i=1}^{r} R_{A_i^{I}}}^{I}(E)}(x)\right\rangle \mid x \in U\right\}; \qquad \overline{\sum_{i=1}^{r} R_{A_i^{I}}}^{I}(E) = \left\{\left\langle x, \mu_{\overline{\sum_{i=1}^{r} R_{A_i^{I}}}^{I}(E)}(x), \nu_{\overline{\sum_{i=1}^{r} R_{A_i^{I}}}^{I}(E)}(x)\right\rangle \mid x \in U\right\}.$$
where
$$\mu_{\underline{\sum_{i=1}^{r} R_{A_i^{I}}}^{I}(E)}(x) = \bigwedge_{i=1}^{r}\,\inf_{y \in [x]_{A_i^{I}}} \mu_E(y), \qquad \nu_{\underline{\sum_{i=1}^{r} R_{A_i^{I}}}^{I}(E)}(x) = \bigvee_{i=1}^{r}\,\sup_{y \in [x]_{A_i^{I}}} \nu_E(y);$$
$$\mu_{\overline{\sum_{i=1}^{r} R_{A_i^{I}}}^{I}(E)}(x) = \bigvee_{i=1}^{r}\,\sup_{y \in [x]_{A_i^{I}}} \mu_E(y), \qquad \nu_{\overline{\sum_{i=1}^{r} R_{A_i^{I}}}^{I}(E)}(x) = \bigwedge_{i=1}^{r}\,\inf_{y \in [x]_{A_i^{I}}} \nu_E(y).$$
Here $R_{A_i^{I}}$ is the equivalence relation of x with respect to the attribute reduction set $A_i^{I}$ under IMRS, and $[x]_{A_i^{I}}$ is the equivalence class of $R_{A_i^{I}}$.
If $\underline{\sum_{i=1}^{r} R_{A_i^{I}}}^{I}(E) \neq \overline{\sum_{i=1}^{r} R_{A_i^{I}}}^{I}(E)$, then the pair $(\underline{\sum_{i=1}^{r} R_{A_i^{I}}}^{I}(E), \overline{\sum_{i=1}^{r} R_{A_i^{I}}}^{I}(E))$ is said to be an IIMRIFS.
According to Definitions 15 and 16, the following theorem can be captured.
Theorem 5.
Let $IS = \langle U, A, V, f \rangle$ be an information system, $A_i^{I} = \{A_1, A_2, \ldots, A_r\} \subseteq A$, $A = \{A_1, A_2, \ldots, A_m\}$, and let $E_1$, $E_2$ be IFS on U. Then the IOMRIFS and IIMRIFS models have the following properties:
(1) $\underline{\sum_{i=1}^{r} R_{A_i^{I}}}^{O}(\sim E_1) = \sim \overline{\sum_{i=1}^{r} R_{A_i^{I}}}^{O}(E_1)$;
(2) $\overline{\sum_{i=1}^{r} R_{A_i^{I}}}^{O}(\sim E_1) = \sim \underline{\sum_{i=1}^{r} R_{A_i^{I}}}^{O}(E_1)$;
(3) $\underline{\sum_{i=1}^{r} R_{A_i^{I}}}^{I}(\sim E_1) = \sim \overline{\sum_{i=1}^{r} R_{A_i^{I}}}^{I}(E_1)$;
(4) $\overline{\sum_{i=1}^{r} R_{A_i^{I}}}^{I}(\sim E_1) = \sim \underline{\sum_{i=1}^{r} R_{A_i^{I}}}^{I}(E_1)$;
(5) $\underline{\sum_{i=1}^{r} R_{A_i^{I}}}^{I}(E_1) \subseteq \underline{\sum_{i=1}^{r} R_{A_i^{I}}}^{O}(E_1)$;
(6) $\overline{\sum_{i=1}^{r} R_{A_i^{I}}}^{O}(E_1) \subseteq \overline{\sum_{i=1}^{r} R_{A_i^{I}}}^{I}(E_1)$;
(7) $\underline{\sum_{i=1}^{r} R_{A_i^{I}}}^{\Delta}(E_1 \cap E_2) = \underline{\sum_{i=1}^{r} R_{A_i^{I}}}^{\Delta}(E_1) \cap \underline{\sum_{i=1}^{r} R_{A_i^{I}}}^{\Delta}(E_2)$, for $\Delta \in \{O, I\}$;
(8) $\overline{\sum_{i=1}^{r} R_{A_i^{I}}}^{\Delta}(E_1 \cup E_2) = \overline{\sum_{i=1}^{r} R_{A_i^{I}}}^{\Delta}(E_1) \cup \overline{\sum_{i=1}^{r} R_{A_i^{I}}}^{\Delta}(E_2)$, for $\Delta \in \{O, I\}$;
(9) $\underline{\sum_{i=1}^{r} R_{A_i^{I}}}^{\Delta}(E_1 \cup E_2) \supseteq \underline{\sum_{i=1}^{r} R_{A_i^{I}}}^{\Delta}(E_1) \cup \underline{\sum_{i=1}^{r} R_{A_i^{I}}}^{\Delta}(E_2)$, for $\Delta \in \{O, I\}$;
(10) $\overline{\sum_{i=1}^{r} R_{A_i^{I}}}^{\Delta}(E_1 \cap E_2) \subseteq \overline{\sum_{i=1}^{r} R_{A_i^{I}}}^{\Delta}(E_1) \cap \overline{\sum_{i=1}^{r} R_{A_i^{I}}}^{\Delta}(E_2)$, for $\Delta \in \{O, I\}$.
Proof. 
It can be derived directly from Definitions 15 and 16. □
The characteristics of the proposed four models are further verified by Example 2 below.
Example 2.
(Continued from Example 1). From Example 1, we know that the 5 sites are each evaluated on 10 investment schemes. Suppose the schemes have the following IFS:
$$E = \left\{\tfrac{[0.25, 0.43]}{x_1}, \tfrac{[0.51, 0.28]}{x_2}, \tfrac{[0.54, 0.38]}{x_3}, \tfrac{[0.37, 0.59]}{x_4}, \tfrac{[0.49, 0.35]}{x_5}, \tfrac{[0.92, 0.04]}{x_6}, \tfrac{[0.09, 0.86]}{x_7}, \tfrac{[0.15, 0.46]}{x_8}, \tfrac{[0.72, 0.12]}{x_9}, \tfrac{[0.67, 0.23]}{x_{10}}\right\}.$$
(1) The lower and upper approximations of OOMRIFS can be calculated as follows:
$$\underline{\sum_{i=1}^{r} R_{A_i^{O}}}^{O}(E) = \left\{\tfrac{[0.25, 0.59]}{x_1}, \tfrac{[0.49, 0.38]}{x_2}, \tfrac{[0.49, 0.38]}{x_3}, \tfrac{[0.25, 0.59]}{x_4}, \tfrac{[0.49, 0.38]}{x_5}, \tfrac{[0.25, 0.46]}{x_6}, \tfrac{[0.09, 0.86]}{x_7}, \tfrac{[0.15, 0.46]}{x_8}, \tfrac{[0.15, 0.46]}{x_9}, \tfrac{[0.67, 0.23]}{x_{10}}\right\},$$
$$\overline{\sum_{i=1}^{r} R_{A_i^{O}}}^{O}(E) = \left\{\tfrac{[0.51, 0.28]}{x_1}, \tfrac{[0.51, 0.28]}{x_2}, \tfrac{[0.54, 0.35]}{x_3}, \tfrac{[0.51, 0.28]}{x_4}, \tfrac{[0.54, 0.35]}{x_5}, \tfrac{[0.92, 0.04]}{x_6}, \tfrac{[0.54, 0.35]}{x_7}, \tfrac{[0.15, 0.46]}{x_8}, \tfrac{[0.72, 0.12]}{x_9}, \tfrac{[0.67, 0.23]}{x_{10}}\right\}.$$
(2) Similarly, in OIMRIFS, we have:
$$\underline{\sum_{i=1}^{r} R_{A_i^{O}}}^{I}(E) = \left\{\tfrac{[0.25, 0.59]}{x_1}, \tfrac{[0.25, 0.59]}{x_2}, \tfrac{[0.09, 0.86]}{x_3}, \tfrac{[0.25, 0.59]}{x_4}, \tfrac{[0.09, 0.86]}{x_5}, \tfrac{[0.15, 0.59]}{x_6}, \tfrac{[0.09, 0.86]}{x_7}, \tfrac{[0.15, 0.46]}{x_8}, \tfrac{[0.09, 0.86]}{x_9}, \tfrac{[0.09, 0.86]}{x_{10}}\right\},$$
$$\overline{\sum_{i=1}^{r} R_{A_i^{O}}}^{I}(E) = \left\{\tfrac{[0.92, 0.04]}{x_1}, \tfrac{[0.54, 0.28]}{x_2}, \tfrac{[0.54, 0.28]}{x_3}, \tfrac{[0.92, 0.04]}{x_4}, \tfrac{[0.54, 0.28]}{x_5}, \tfrac{[0.92, 0.04]}{x_6}, \tfrac{[0.72, 0.12]}{x_7}, \tfrac{[0.92, 0.04]}{x_8}, \tfrac{[0.92, 0.04]}{x_9}, \tfrac{[0.72, 0.12]}{x_{10}}\right\}.$$
From the above results, Figure 1 can be drawn. In the legend, $\mu_1 = \mu_{\underline{OO}}(x_j)$ and $\nu_1 = \nu_{\underline{OO}}(x_j)$ represent the lower approximation of OOMRIFS; $\mu_2 = \mu_{\overline{OO}}(x_j)$ and $\nu_2 = \nu_{\overline{OO}}(x_j)$ the upper approximation of OOMRIFS; $\mu_3 = \mu_{\underline{OI}}(x_j)$ and $\nu_3 = \nu_{\underline{OI}}(x_j)$ the lower approximation of OIMRIFS; and $\mu_4 = \mu_{\overline{OI}}(x_j)$ and $\nu_4 = \nu_{\overline{OI}}(x_j)$ the upper approximation of OIMRIFS.
From Figure 1, we can get:
$$\mu_{\underline{OI}}(x_j) \le \mu_{\underline{OO}}(x_j) \le \mu_{\overline{OO}}(x_j) \le \mu_{\overline{OI}}(x_j); \qquad \nu_{\overline{OI}}(x_j) \le \nu_{\overline{OO}}(x_j) \le \nu_{\underline{OO}}(x_j) \le \nu_{\underline{OI}}(x_j).$$
As shown in Figure 1, the rules of Theorem 4 are satisfied. By constructing the OOMRIFS and OIMRIFS models, we can reduce the subjective scoring errors of experts under intuitionistic fuzzy conditions.
(3) Similar to (1), in IOMRIFS, we have:
$$\underline{\sum_{i=1}^{r} R_{A_i^{I}}}^{O}(E) = \left\{\tfrac{[0.25, 0.43]}{x_1}, \tfrac{[0.25, 0.43]}{x_2}, \tfrac{[0.25, 0.43]}{x_3}, \tfrac{[0.37, 0.59]}{x_4}, \tfrac{[0.25, 0.43]}{x_5}, \tfrac{[0.25, 0.46]}{x_6}, \tfrac{[0.09, 0.86]}{x_7}, \tfrac{[0.15, 0.46]}{x_8}, \tfrac{[0.67, 0.23]}{x_9}, \tfrac{[0.67, 0.23]}{x_{10}}\right\},$$
$$\overline{\sum_{i=1}^{r} R_{A_i^{I}}}^{O}(E) = \left\{\tfrac{[0.51, 0.28]}{x_1}, \tfrac{[0.51, 0.28]}{x_2}, \tfrac{[0.54, 0.35]}{x_3}, \tfrac{[0.37, 0.59]}{x_4}, \tfrac{[0.49, 0.35]}{x_5}, \tfrac{[0.92, 0.04]}{x_6}, \tfrac{[0.51, 0.35]}{x_7}, \tfrac{[0.49, 0.35]}{x_8}, \tfrac{[0.72, 0.12]}{x_9}, \tfrac{[0.67, 0.23]}{x_{10}}\right\}.$$
(4) In the same way as (1), in IIMRIFS, we can get:
$$\underline{\sum_{i=1}^{r} R_{A_i^{I}}}^{I}(E) = \left\{\tfrac{[0.25, 0.59]}{x_1}, \tfrac{[0.09, 0.86]}{x_2}, \tfrac{[0.09, 0.86]}{x_3}, \tfrac{[0.25, 0.59]}{x_4}, \tfrac{[0.09, 0.86]}{x_5}, \tfrac{[0.09, 0.86]}{x_6}, \tfrac{[0.09, 0.86]}{x_7}, \tfrac{[0.09, 0.86]}{x_8}, \tfrac{[0.15, 0.46]}{x_9}, \tfrac{[0.67, 0.23]}{x_{10}}\right\},$$
$$\overline{\sum_{i=1}^{r} R_{A_i^{I}}}^{I}(E) = \left\{\tfrac{[0.92, 0.04]}{x_1}, \tfrac{[0.54, 0.28]}{x_2}, \tfrac{[0.92, 0.04]}{x_3}, \tfrac{[0.92, 0.04]}{x_4}, \tfrac{[0.54, 0.28]}{x_5}, \tfrac{[0.92, 0.04]}{x_6}, \tfrac{[0.92, 0.04]}{x_7}, \tfrac{[0.92, 0.04]}{x_8}, \tfrac{[0.92, 0.04]}{x_9}, \tfrac{[0.72, 0.12]}{x_{10}}\right\}.$$
From (3) and (4), Figure 2 can be drawn. In the legend, $\mu_5 = \mu_{\underline{IO}}(x_j)$ and $\nu_5 = \nu_{\underline{IO}}(x_j)$ represent the lower approximation of IOMRIFS; $\mu_6 = \mu_{\overline{IO}}(x_j)$ and $\nu_6 = \nu_{\overline{IO}}(x_j)$ the upper approximation of IOMRIFS; $\mu_7 = \mu_{\underline{II}}(x_j)$ and $\nu_7 = \nu_{\underline{II}}(x_j)$ the lower approximation of IIMRIFS; and $\mu_8 = \mu_{\overline{II}}(x_j)$ and $\nu_8 = \nu_{\overline{II}}(x_j)$ the upper approximation of IIMRIFS.
From Figure 2, we can get:
$$\mu_{\underline{II}}(x_j) \le \mu_{\underline{IO}}(x_j) \le \mu_{\overline{IO}}(x_j) \le \mu_{\overline{II}}(x_j); \qquad \nu_{\overline{II}}(x_j) \le \nu_{\overline{IO}}(x_j) \le \nu_{\underline{IO}}(x_j) \le \nu_{\underline{II}}(x_j).$$
As shown in Figure 2, the rules of Theorem 5 are satisfied.
Through Example 2, we obtain four relatively more objective MRIFS models, which are beneficial for reducing subjective errors.

5. Three-Way Decisions Models Based on MRIFS and Optimal Granularity Selection

To obtain the optimal granularity selection results under the optimistic and pessimistic multi-granulation settings, it is necessary to further distinguish the importance degree of each granularity in the reduction sets. We combine each of the four MRIFS models above with three-way decisions theory to get four new three-way decisions models. By extracting the rules, the redundant objects in the reduction sets are removed, and the decision error is further reduced. The optimal granularity selection results in the two cases are then obtained by constructing the comprehensive score function and comprehensive accuracy function measuring each granularity of the reduction sets.

5.1. Three-Way Decisions Model Based on OOMRIFS

Suppose $A_i^{O}$ is the reduction set under OMRS. According to reference [46], the expected loss functions $R^{OO}(\omega_{\diamond} \mid [x]_{A_i^{O}})$ ($\diamond = P, B, N$) of an object x can be obtained:
$$R^{OO}(\omega_P \mid [x]_{A_i^{O}}) = \lambda_{PP}\,\mu^{OO}(x) + \lambda_{PN}\,\nu^{OO}(x) + \lambda_{PB}\,\pi^{OO}(x);$$
$$R^{OO}(\omega_N \mid [x]_{A_i^{O}}) = \lambda_{NP}\,\mu^{OO}(x) + \lambda_{NN}\,\nu^{OO}(x) + \lambda_{NB}\,\pi^{OO}(x);$$
$$R^{OO}(\omega_B \mid [x]_{A_i^{O}}) = \lambda_{BP}\,\mu^{OO}(x) + \lambda_{BN}\,\nu^{OO}(x) + \lambda_{BB}\,\pi^{OO}(x).$$
where, for the lower approximation,
$$\mu^{OO}(x) = \bigvee_{i=1}^{r}\,\inf_{y \in [x]_{A_i^{O}}} \mu_E(y), \quad \nu^{OO}(x) = \bigwedge_{i=1}^{r}\,\sup_{y \in [x]_{A_i^{O}}} \nu_E(y), \quad \pi^{OO}(x) = 1 - \mu^{OO}(x) - \nu^{OO}(x);$$
or, for the upper approximation,
$$\mu^{OO}(x) = \bigwedge_{i=1}^{r}\,\sup_{y \in [x]_{A_i^{O}}} \mu_E(y), \quad \nu^{OO}(x) = \bigvee_{i=1}^{r}\,\inf_{y \in [x]_{A_i^{O}}} \nu_E(y), \quad \pi^{OO}(x) = 1 - \mu^{OO}(x) - \nu^{OO}(x).$$
The minimum-risk decision rules derived from the Bayesian decision process are as follows:
(P): If $R(\omega_P \mid [x]_{A_i^{O}}) \le R(\omega_B \mid [x]_{A_i^{O}})$ and $R(\omega_P \mid [x]_{A_i^{O}}) \le R(\omega_N \mid [x]_{A_i^{O}})$, then $x \in POS(X)$;
(N): If $R(\omega_N \mid [x]_{A_i^{O}}) \le R(\omega_P \mid [x]_{A_i^{O}})$ and $R(\omega_N \mid [x]_{A_i^{O}}) \le R(\omega_B \mid [x]_{A_i^{O}})$, then $x \in NEG(X)$;
(B): If $R(\omega_B \mid [x]_{A_i^{O}}) \le R(\omega_N \mid [x]_{A_i^{O}})$ and $R(\omega_B \mid [x]_{A_i^{O}}) \le R(\omega_P \mid [x]_{A_i^{O}})$, then $x \in BND(X)$.
Thus, the decision rules (P)-(B) can be re-expressed concisely as:
(P) holds when
$$\left(\mu^{OO}(x) \ge (1 - \pi^{OO}(x))\,\frac{\lambda_{PN} - \lambda_{NN}}{(\lambda_{NP} - \lambda_{PP}) + (\lambda_{PN} - \lambda_{NN})}\right) \wedge \left(\mu^{OO}(x) \ge (1 - \pi^{OO}(x))\,\frac{\lambda_{PN} - \lambda_{BN}}{(\lambda_{BP} - \lambda_{PP}) + (\lambda_{PN} - \lambda_{BN})}\right);$$
(N) holds when
$$\left(\mu^{OO}(x) < (1 - \pi^{OO}(x))\,\frac{\lambda_{PN} - \lambda_{NN}}{(\lambda_{NP} - \lambda_{PP}) + (\lambda_{PN} - \lambda_{NN})}\right) \wedge \left(\mu^{OO}(x) < (1 - \pi^{OO}(x))\,\frac{\lambda_{BN} - \lambda_{NN}}{(\lambda_{NP} - \lambda_{BP}) + (\lambda_{BN} - \lambda_{NN})}\right);$$
(B) holds when
$$\left(\mu^{OO}(x) < (1 - \pi^{OO}(x))\,\frac{\lambda_{PN} - \lambda_{BN}}{(\lambda_{PN} - \lambda_{BN}) + (\lambda_{BP} - \lambda_{PP})}\right) \wedge \left(\mu^{OO}(x) \ge (1 - \pi^{OO}(x))\,\frac{\lambda_{BN} - \lambda_{NN}}{(\lambda_{BN} - \lambda_{NN}) + (\lambda_{NP} - \lambda_{BP})}\right).$$
That is, with the thresholds of Definition 4, the three conditions are $\mu^{OO}(x) \ge (1-\pi^{OO}(x))\gamma \wedge \mu^{OO}(x) \ge (1-\pi^{OO}(x))\alpha$, $\mu^{OO}(x) < (1-\pi^{OO}(x))\gamma \wedge \mu^{OO}(x) < (1-\pi^{OO}(x))\beta$, and $(1-\pi^{OO}(x))\beta \le \mu^{OO}(x) < (1-\pi^{OO}(x))\alpha$, respectively.
Therefore, the three-way decisions rules based on OOMRIFS are as follows:
(P1): If $\mu^{OO}(x) \ge (1 - \pi^{OO}(x))\alpha$, then $x \in POS(X)$;
(N1): If $\mu^{OO}(x) \le (1 - \pi^{OO}(x))\beta$, then $x \in NEG(X)$;
(B1): If $(1 - \pi^{OO}(x))\beta < \mu^{OO}(x) < (1 - \pi^{OO}(x))\alpha$, then $x \in BND(X)$.
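A minimal Python sketch of rules (P1)-(B1) follows, under stated assumptions: mu and pi come from the OOMRIFS lower or upper approximation of an object, and alpha, beta are the Definition 4 thresholds; the usage values are taken from Example 2 and Section 6, and the function name is illustrative.

```python
# Rules (P1)-(B1), as a sketch; the same form serves (P2)-(B4) with other mu, pi.

def three_way_ifs(mu, pi, alpha, beta):
    scale = 1.0 - pi                 # (1 - pi(x)) rescales the thresholds
    if mu >= scale * alpha:
        return "POS"                 # rule (P1)
    if mu <= scale * beta:
        return "NEG"                 # rule (N1)
    return "BND"                     # rule (B1)

# x6 in the lower OOMRIFS approximation of Example 2: mu = 0.25, nu = 0.46,
# so pi = 1 - 0.25 - 0.46 = 0.29; with alpha = 0.75 and beta = 1/3:
print(three_way_ifs(0.25, 0.29, 0.75, 1 / 3))   # BND
```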

5.2. Three-Way Decisions Model Based on OIMRIFS

Suppose $A_i^{O}$ is the reduction set under OMRS. According to reference [46], the expected loss functions $R^{OI}(\omega_{\diamond} \mid [x]_{A_i^{O}})$ ($\diamond = P, B, N$) of an object x are presented as follows:
$$R^{OI}(\omega_P \mid [x]_{A_i^{O}}) = \lambda_{PP}\,\mu^{OI}(x) + \lambda_{PN}\,\nu^{OI}(x) + \lambda_{PB}\,\pi^{OI}(x);$$
$$R^{OI}(\omega_N \mid [x]_{A_i^{O}}) = \lambda_{NP}\,\mu^{OI}(x) + \lambda_{NN}\,\nu^{OI}(x) + \lambda_{NB}\,\pi^{OI}(x);$$
$$R^{OI}(\omega_B \mid [x]_{A_i^{O}}) = \lambda_{BP}\,\mu^{OI}(x) + \lambda_{BN}\,\nu^{OI}(x) + \lambda_{BB}\,\pi^{OI}(x).$$
where, for the lower approximation,
$$\mu^{OI}(x) = \bigwedge_{i=1}^{r}\,\inf_{y \in [x]_{A_i^{O}}} \mu_E(y), \quad \nu^{OI}(x) = \bigvee_{i=1}^{r}\,\sup_{y \in [x]_{A_i^{O}}} \nu_E(y), \quad \pi^{OI}(x) = 1 - \mu^{OI}(x) - \nu^{OI}(x);$$
or, for the upper approximation,
$$\mu^{OI}(x) = \bigvee_{i=1}^{r}\,\sup_{y \in [x]_{A_i^{O}}} \mu_E(y), \quad \nu^{OI}(x) = \bigwedge_{i=1}^{r}\,\inf_{y \in [x]_{A_i^{O}}} \nu_E(y), \quad \pi^{OI}(x) = 1 - \mu^{OI}(x) - \nu^{OI}(x).$$
Therefore, the three-way decisions rules based on OIMRIFS are as follows:
(P2): If $\mu^{OI}(x) \ge (1 - \pi^{OI}(x))\alpha$, then $x \in POS(X)$;
(N2): If $\mu^{OI}(x) \le (1 - \pi^{OI}(x))\beta$, then $x \in NEG(X)$;
(B2): If $(1 - \pi^{OI}(x))\beta < \mu^{OI}(x) < (1 - \pi^{OI}(x))\alpha$, then $x \in BND(X)$.

5.3. Three-Way Decisions Model Based on IOMRIFS

Suppose $A_i^{I}$ is the reduction set under IMRS. According to reference [46], the expected loss functions $R^{IO}(\omega_{\diamond} \mid [x]_{A_i^{I}})$ ($\diamond = P, B, N$) of an object x are as follows:
$$R^{IO}(\omega_P \mid [x]_{A_i^{I}}) = \lambda_{PP}\,\mu^{IO}(x) + \lambda_{PN}\,\nu^{IO}(x) + \lambda_{PB}\,\pi^{IO}(x);$$
$$R^{IO}(\omega_N \mid [x]_{A_i^{I}}) = \lambda_{NP}\,\mu^{IO}(x) + \lambda_{NN}\,\nu^{IO}(x) + \lambda_{NB}\,\pi^{IO}(x);$$
$$R^{IO}(\omega_B \mid [x]_{A_i^{I}}) = \lambda_{BP}\,\mu^{IO}(x) + \lambda_{BN}\,\nu^{IO}(x) + \lambda_{BB}\,\pi^{IO}(x).$$
where, for the lower approximation,
$$\mu^{IO}(x) = \bigvee_{i=1}^{r}\,\inf_{y \in [x]_{A_i^{I}}} \mu_E(y), \quad \nu^{IO}(x) = \bigwedge_{i=1}^{r}\,\sup_{y \in [x]_{A_i^{I}}} \nu_E(y), \quad \pi^{IO}(x) = 1 - \mu^{IO}(x) - \nu^{IO}(x);$$
or, for the upper approximation,
$$\mu^{IO}(x) = \bigwedge_{i=1}^{r}\,\sup_{y \in [x]_{A_i^{I}}} \mu_E(y), \quad \nu^{IO}(x) = \bigvee_{i=1}^{r}\,\inf_{y \in [x]_{A_i^{I}}} \nu_E(y), \quad \pi^{IO}(x) = 1 - \mu^{IO}(x) - \nu^{IO}(x).$$
Therefore, the three-way decisions rules based on IOMRIFS are as follows:
(P3): If $\mu^{IO}(x) \ge (1 - \pi^{IO}(x))\alpha$, then $x \in POS(X)$;
(N3): If $\mu^{IO}(x) \le (1 - \pi^{IO}(x))\beta$, then $x \in NEG(X)$;
(B3): If $(1 - \pi^{IO}(x))\beta < \mu^{IO}(x) < (1 - \pi^{IO}(x))\alpha$, then $x \in BND(X)$.

5.4. Three-Way Decisions Model Based on IIMRIFS

Suppose $A_i^{I}$ is the reduction set under IMRS. As in Section 5.1, the expected loss functions $R^{II}(\omega_{\diamond} \mid [x]_{A_i^{I}})$ ($\diamond = P, B, N$) of an object x are as follows:
$$R^{II}(\omega_P \mid [x]_{A_i^{I}}) = \lambda_{PP}\,\mu^{II}(x) + \lambda_{PN}\,\nu^{II}(x) + \lambda_{PB}\,\pi^{II}(x);$$
$$R^{II}(\omega_N \mid [x]_{A_i^{I}}) = \lambda_{NP}\,\mu^{II}(x) + \lambda_{NN}\,\nu^{II}(x) + \lambda_{NB}\,\pi^{II}(x);$$
$$R^{II}(\omega_B \mid [x]_{A_i^{I}}) = \lambda_{BP}\,\mu^{II}(x) + \lambda_{BN}\,\nu^{II}(x) + \lambda_{BB}\,\pi^{II}(x).$$
where, for the lower approximation,
$$\mu^{II}(x) = \bigwedge_{i=1}^{r}\,\inf_{y \in [x]_{A_i^{I}}} \mu_E(y), \quad \nu^{II}(x) = \bigvee_{i=1}^{r}\,\sup_{y \in [x]_{A_i^{I}}} \nu_E(y), \quad \pi^{II}(x) = 1 - \mu^{II}(x) - \nu^{II}(x);$$
or, for the upper approximation,
$$\mu^{II}(x) = \bigvee_{i=1}^{r}\,\sup_{y \in [x]_{A_i^{I}}} \mu_E(y), \quad \nu^{II}(x) = \bigwedge_{i=1}^{r}\,\inf_{y \in [x]_{A_i^{I}}} \nu_E(y), \quad \pi^{II}(x) = 1 - \mu^{II}(x) - \nu^{II}(x).$$
Therefore, the three-way decisions rules based on IIMRIFS are captured as follows:
(P4): If $\mu^{II}(x) \ge (1 - \pi^{II}(x))\alpha$, then $x \in POS(X)$;
(N4): If $\mu^{II}(x) \le (1 - \pi^{II}(x))\beta$, then $x \in NEG(X)$;
(B4): If $(1 - \pi^{II}(x))\beta < \mu^{II}(x) < (1 - \pi^{II}(x))\alpha$, then $x \in BND(X)$.
By constructing the above three-way decisions models, the redundant objects in the reduction sets can be removed, which is beneficial to optimal granularity selection.

5.5. Comprehensive Measuring Methods of Granularity

Definition 17
([40]). Let $\tilde{E}(f_1) = (\mu_{\tilde{E}}(f_1), \nu_{\tilde{E}}(f_1))$ be an intuitionistic fuzzy number, $f_1 \in U$. Then the score function of $\tilde{E}(f_1)$ is calculated as:
$$S(\tilde{E}(f_1)) = \mu_{\tilde{E}}(f_1) - \nu_{\tilde{E}}(f_1).$$
The accuracy function of $\tilde{E}(f_1)$ is defined as:
$$H(\tilde{E}(f_1)) = \mu_{\tilde{E}}(f_1) + \nu_{\tilde{E}}(f_1).$$
where $-1 \le S(\tilde{E}(f_1)) \le 1$ and $0 \le H(\tilde{E}(f_1)) \le 1$.
Definition 18.
Let $DIS = (U, C \cup D)$ be a decision information system, and $A = \{A_1, A_2, \ldots, A_m\}$ are m sub-attributes of C. Suppose E is an IFS on the universe $U = \{x_1, x_2, \ldots, x_n\}$, defined by the membership and non-membership functions $\mu_{A_i}(x_j)$ and $\nu_{A_i}(x_j)$, $|[x_j]_{A_i}|$ is the cardinality of the equivalence class of $x_j$ on granularity $A_i$, and $U/D = \{X_1, X_2, \ldots, X_s\}$ is the partition induced by the decision attributes D. Then the comprehensive score function of granularity $A_i$ is captured as:
$$CSF_{A_i}(E) = \frac{1}{s} \times \sum_{j=1}^{n} \frac{\mu_{A_i}(x_j) - \nu_{A_i}(x_j)}{|[x_j]_{A_i}|}.$$
The comprehensive accuracy function of granularity $A_i$ is captured as:
$$CAF_{A_i}(E) = \frac{1}{s} \times \sum_{j=1}^{n} \frac{\mu_{A_i}(x_j) + \nu_{A_i}(x_j)}{|[x_j]_{A_i}|}.$$
where $-1 \le CSF_{A_i}(E) \le 1$ and $0 \le CAF_{A_i}(E) \le 1$.
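The following is a small Python sketch of Definition 18 under stated assumptions: values maps each surviving object to its (mu, nu) pair on granularity $A_i$, blocks is the partition $U/A_i$ restricted to those objects, and s is the number of blocks; the usage data re-check the first computation of Example 3 (granularity A2, lower OOMRIFS), and all names are illustrative.

```python
# Comprehensive score and accuracy functions (Definition 18), as a sketch.

def csf(values, blocks):
    s = len(blocks)
    total = 0.0
    for block in blocks:
        for x in block:
            mu, nu = values[x]
            total += (mu - nu) / len(block)   # each object weighted by class size
    return total / s

def caf(values, blocks):
    s = len(blocks)
    total = 0.0
    for block in blocks:
        for x in block:
            mu, nu = values[x]
            total += (mu + nu) / len(block)
    return total / s

values = {"x2": (0.49, 0.38), "x3": (0.49, 0.38), "x5": (0.49, 0.38),
          "x6": (0.25, 0.46), "x10": (0.67, 0.23)}
blocks = [["x2"], ["x3", "x5"], ["x6"], ["x10"]]
print(csf(values, blocks))   # 0.1125 (up to float rounding)
print(caf(values, blocks))   # 0.8375 (up to float rounding)
```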
According to references [27,39], we can deduce the following comparison rules, stated as Definition 19.
Definition 19.
Let $A_1$, $A_2$ be two granularities. Then we have:
(1) If $CSF_{A_1}(E) > CSF_{A_2}(E)$, then $A_2$ is smaller than $A_1$, expressed as $A_1 > A_2$;
(2) If $CSF_{A_1}(E) < CSF_{A_2}(E)$, then $A_1$ is smaller than $A_2$, expressed as $A_1 < A_2$;
(3) If $CSF_{A_1}(E) = CSF_{A_2}(E)$, then:
(i) If $CAF_{A_1}(E) = CAF_{A_2}(E)$, then $A_2$ is equal to $A_1$, expressed as $A_1 = A_2$;
(ii) If $CAF_{A_1}(E) > CAF_{A_2}(E)$, then $A_2$ is smaller than $A_1$, expressed as $A_1 > A_2$;
(iii) If $CAF_{A_1}(E) < CAF_{A_2}(E)$, then $A_1$ is smaller than $A_2$, expressed as $A_1 < A_2$.

5.6. Optimal Granularity Selection Algorithm to Derive Three-Way Decisions from MRIFS

Suppose the reduction sets of OMRS and IMRS are $A_i^{O}$ and $A_i^{I}$, respectively. In this section, we take the reduction set under OMRS as an example to obtain the optimal granularity selection result, denoted $A_{\ast}^{O}$.
Algorithm 2. Optimal granularity selection algorithm deriving three-way decisions from MRIFS
Input: $DIS = (U, C \cup D, V, f)$; $A = \{A_1, A_2, \ldots, A_m\}$, m sub-attributes of the condition attributes C; $A_i \in A$; $U/D = \{X_1, X_2, \ldots, X_s\}$; an IFS E;
Output: the optimal granularity selection result $A_{\ast}^{O}$.
1: compute the reduction set $A_i^{O}$ via Algorithm 1;
2: if $|A_i^{O}| > 1$
3:  for each $A_i \in A_i^{O}$
4:   compute $\mu_{\underline{\sum_{i=1}^{r} R_{A_i^{O}}}^{\Delta}(E)}(x_j)$, $\nu_{\underline{\sum_{i=1}^{r} R_{A_i^{O}}}^{\Delta}(E)}(x_j)$, $\mu_{\overline{\sum_{i=1}^{r} R_{A_i^{O}}}^{\Delta}(E)}(x_j)$, and $\nu_{\overline{\sum_{i=1}^{r} R_{A_i^{O}}}^{\Delta}(E)}(x_j)$;
5:   according to (P1)-(B1) and (P2)-(B2), compute $POS(\underline{X}^{O\Delta})$, $NEG(\underline{X}^{O\Delta})$, $BND(\underline{X}^{O\Delta})$, $POS(\overline{X}^{O\Delta})$, $NEG(\overline{X}^{O\Delta})$, and $BND(\overline{X}^{O\Delta})$;
6:   if $NEG(\underline{X}^{O\Delta}) \neq U$ or $NEG(\overline{X}^{O\Delta}) \neq U$
7:     compute $U/A_i^{\underline{O\Delta}}$, $CSF_{A_i^{\underline{O\Delta}}}(E)$, $CAF_{A_i^{\underline{O\Delta}}}(E)$ or $U/A_i^{\overline{O\Delta}}$, $CSF_{A_i^{\overline{O\Delta}}}(E)$, $CAF_{A_i^{\overline{O\Delta}}}(E)$;
8:     select the optimal granularity according to Definition 19;
9:     return $A_{\ast}^{O} = A_i$;
10:  end
11:  else
12:    return NULL;
13:  end
14: end
15: end
16: else
17: return $A_{\ast}^{O} = A_i^{O}$;
18: end
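A high-level Python sketch of Algorithm 2 follows, under stated assumptions: it reuses mrifs_approx, three_way_ifs, csf, and caf from the earlier sketches, works on the lower approximation only for brevity, and assumes at least one object survives the rejection rule; all names are illustrative rather than the paper's.

```python
# A sketch of Algorithm 2's selection loop, assuming the earlier helpers.

def optimal_granularity(reduct, e, alpha, beta, delta):
    # reduct: dict mapping granularity name -> partition of U.
    if len(reduct) <= 1:
        return next(iter(reduct), None)          # nothing to choose between
    lower, upper = mrifs_approx(list(reduct.values()), e, delta)
    scores = {}
    for name, partition in reduct.items():
        # Keep the objects not rejected by (P1)-(B1) on the lower approximation.
        kept = [x for x in e
                if three_way_ifs(lower[x][0], 1 - sum(lower[x]), alpha, beta) != "NEG"]
        blocks = [[x for x in b if x in kept] for b in partition]
        blocks = [b for b in blocks if b]
        values = {x: lower[x] for x in kept}
        # Rank by comprehensive score, breaking ties by comprehensive accuracy.
        scores[name] = (csf(values, blocks), caf(values, blocks))
    return max(scores, key=scores.get)           # lexicographic: CSF, then CAF
```

This mirrors Definition 19's ordering: the tuple comparison uses the comprehensive score first and falls back to the comprehensive accuracy on ties.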

6. Analysis of Example 3 (Continued from Example 2)

In Example 1, only site 1 can be ignored under both the optimistic and pessimistic multi-granulation conditions, so it can be determined that site 1 does not need to be evaluated, while sites 2 and 3 need further investigation in the optimistic multi-granulation environment. Meanwhile, in the pessimistic multi-granulation environment, comprehensive consideration shows that site 3 can skip the assessment while sites 2, 4, and 5 need further investigation.
According to Example 1, the reduction set of OMRS is $\{A_2, A_3\}$, but in the case of IMRS there are two reduction sets, which conflict with each other. Therefore, the two reduction sets should be considered simultaneously, so the joint reduction set under IMRS is $\{A_2, A_4, A_5\}$.
The corresponding granularity structures of sites 2, 3, 4, and 5 are as follows:
  • $U/A_2 = \{\{x_1, x_2, x_4\}, \{x_3, x_5, x_7\}, \{x_6, x_8, x_9\}, \{x_{10}\}\}$,
  • $U/A_3 = \{\{x_1, x_4, x_6\}, \{x_2, x_3, x_5\}, \{x_8\}, \{x_7, x_9, x_{10}\}\}$,
  • $U/A_4 = \{\{x_1, x_2, x_3, x_5\}, \{x_4\}, \{x_6, x_7, x_8\}, \{x_9, x_{10}\}\}$,
  • $U/A_5 = \{\{x_1, x_3, x_4, x_6\}, \{x_2, x_7\}, \{x_5, x_8\}, \{x_9, x_{10}\}\}$.
According to reference [11], we can get:
$$\alpha = \frac{8 - 2}{(8 - 2) + (2 - 0)} = 0.75; \qquad \beta = \frac{2 - 0}{(2 - 0) + (6 - 2)} = 0.33.$$
The optimal site selection process under OMRS and IMRS is as follows:
(1) Optimal site selection based on OOMRIFS
According to Example 2, we can get the values of the evaluation functions $\mu_{\underline{OO}}(x_j)$, $(1 - \pi_{\underline{OO}}(x_j))\alpha$, $(1 - \pi_{\underline{OO}}(x_j))\beta$, $\mu_{\overline{OO}}(x_j)$, $(1 - \pi_{\overline{OO}}(x_j))\alpha$, and $(1 - \pi_{\overline{OO}}(x_j))\beta$ of OOMRIFS, as shown in Table 4.
By the three-way decisions of Section 5.1, the decision results for the lower and upper approximations of OOMRIFS are as follows:
$$POS(\underline{X}^{OO}) = \emptyset, \quad NEG(\underline{X}^{OO}) = \{x_1, x_4, x_7, x_8, x_9\}, \quad BND(\underline{X}^{OO}) = \{x_2, x_3, x_5, x_6, x_{10}\};$$
$$POS(\overline{X}^{OO}) = \{x_6, x_9\}, \quad NEG(\overline{X}^{OO}) = \{x_8\}, \quad BND(\overline{X}^{OO}) = \{x_1, x_2, x_3, x_4, x_5, x_7, x_{10}\}.$$
In light of the three-way decisions rules based on OOMRIFS, after removing the objects in the rejection domain, we fuse the objects in the deferment domain with those in the acceptance domain for optimal granularity selection. Therefore, the new granularity structures of A2 and A3 are as follows:
$$U/A_2^{\underline{OO}} = \{\{x_2\}, \{x_3, x_5\}, \{x_6\}, \{x_{10}\}\}, \qquad U/A_3^{\underline{OO}} = \{\{x_2, x_3, x_5\}, \{x_6\}, \{x_{10}\}\};$$
$$U/A_2^{\overline{OO}} = \{\{x_1, x_2, x_4\}, \{x_3, x_5, x_7\}, \{x_6, x_9\}, \{x_{10}\}\}, \qquad U/A_3^{\overline{OO}} = \{\{x_1, x_4, x_6\}, \{x_2, x_3, x_5\}, \{x_7, x_9, x_{10}\}\}.$$
Then, according to Definition 18, we can get:
$$CSF_{A_2^{\underline{OO}}}(E) = \frac{1}{4}\left((0.49 - 0.38) + \frac{(0.49 - 0.38) + (0.49 - 0.38)}{2} + (0.25 - 0.46) + (0.67 - 0.23)\right) = 0.1125,$$
$$CSF_{A_3^{\underline{OO}}}(E) = \frac{1}{3}\left(\frac{(0.49 - 0.38) + (0.49 - 0.38) + (0.49 - 0.38)}{3} + (0.25 - 0.46) + (0.67 - 0.23)\right) = 0.1133.$$
Similarly, we have:
$$CSF_{A_2^{\overline{OO}}}(E) = 0.4, \qquad CSF_{A_3^{\overline{OO}}}(E) = 0.3533.$$
From the above results, in OOMRIFS, we cannot decide between sites 2 and 3 from the comprehensive score functions of granularities A2 and A3 alone, since their ordering differs between the lower and upper approximations. Therefore, we further calculate the comprehensive accuracies:
$$CAF_{A_2^{\underline{OO}}}(E) = \frac{1}{4}\left((0.49 + 0.38) + \frac{(0.49 + 0.38) + (0.49 + 0.38)}{2} + (0.25 + 0.46) + (0.67 + 0.23)\right) = 0.8375,$$
$$CAF_{A_3^{\underline{OO}}}(E) = \frac{1}{3}\left(\frac{(0.49 + 0.38) + (0.49 + 0.38) + (0.49 + 0.38)}{3} + (0.25 + 0.46) + (0.67 + 0.23)\right) = 0.8267;$$
Analogously, we have:
$$CAF_{A_2^{\overline{OO}}}(E) = 0.87, \qquad CAF_{A_3^{\overline{OO}}}(E) = 0.86.$$
Through the above calculation, the comprehensive accuracy of granularity A2 is higher in both approximations, so site 2 is selected as the result under OOMRIFS.
(2) Optimal site selection based on OIMRIFS
In the same way as (1), we can get the values of the evaluation functions $\mu_{\underline{OI}}(x_j)$, $(1 - \pi_{\underline{OI}}(x_j))\alpha$, $(1 - \pi_{\underline{OI}}(x_j))\beta$, $\mu_{\overline{OI}}(x_j)$, $(1 - \pi_{\overline{OI}}(x_j))\alpha$, and $(1 - \pi_{\overline{OI}}(x_j))\beta$ of OIMRIFS, listed in Table 5.
By the three-way decisions of Section 5.2, the decision results for the lower and upper approximations of OIMRIFS are as follows:
$$POS(\underline{X}^{OI}) = \emptyset, \quad NEG(\underline{X}^{OI}) = U, \quad BND(\underline{X}^{OI}) = \emptyset;$$
$$POS(\overline{X}^{OI}) = \{x_1, x_4, x_6, x_7, x_8, x_9, x_{10}\}, \quad NEG(\overline{X}^{OI}) = \emptyset, \quad BND(\overline{X}^{OI}) = \{x_2, x_3, x_5\}.$$
Hence, in the upper approximation of OIMRIFS, the granularity structures of A2 and A3 are as follows:
$$U/A_2^{\overline{OI}} = \{\{x_1, x_2, x_4\}, \{x_3, x_5, x_7\}, \{x_6, x_8, x_9\}, \{x_{10}\}\}, \qquad U/A_3^{\overline{OI}} = \{\{x_1, x_4, x_6\}, \{x_2, x_3, x_5\}, \{x_8\}, \{x_7, x_9, x_{10}\}\}.$$
According to Definition 18, we can calculate:
$$CSF_{A_2^{\underline{OI}}}(E) = CSF_{A_3^{\underline{OI}}}(E) = 0; \qquad CAF_{A_2^{\underline{OI}}}(E) = CAF_{A_3^{\underline{OI}}}(E) = 0;$$
$$CSF_{A_2^{\overline{OI}}}(E) = 0.6317, \quad CSF_{A_3^{\overline{OI}}}(E) = 0.6783; \qquad CAF_{A_2^{\overline{OI}}}(E) = 0.885, \quad CAF_{A_3^{\overline{OI}}}(E) = 0.905.$$
In OIMRIFS, the comprehensive score and comprehensive accuracy of granularity A3 are both higher than those of granularity A2, so we choose site 3 as the evaluation site.
In reality, we are more inclined to select the optimal granularity under more stringent requirements. From (1) and (2), granularity A3 is the better choice when the requirements are stricter in the optimistic multi-granulation setting. Therefore, we choose site 3 as the optimal evaluation site under OMRS.
(3) Optimal site selection based on IOMRIFS
Similar to (1), we can obtain the values of the evaluation functions $\mu_{\underline{IO}}(x_j)$, $(1 - \pi_{\underline{IO}}(x_j))\alpha$, $(1 - \pi_{\underline{IO}}(x_j))\beta$, $\mu_{\overline{IO}}(x_j)$, $(1 - \pi_{\overline{IO}}(x_j))\alpha$, and $(1 - \pi_{\overline{IO}}(x_j))\beta$ of IOMRIFS, as described in Table 6.
By the three-way decisions of Section 5.3, the decision results for the lower and upper approximations of IOMRIFS are as follows:
$$POS(\underline{X}^{IO}) = \emptyset, \quad NEG(\underline{X}^{IO}) = \{x_7, x_8\}, \quad BND(\underline{X}^{IO}) = \{x_1, x_2, x_3, x_4, x_5, x_6, x_9, x_{10}\};$$
$$POS(\overline{X}^{IO}) = \{x_6, x_9\}, \quad NEG(\overline{X}^{IO}) = \emptyset, \quad BND(\overline{X}^{IO}) = \{x_1, x_2, x_3, x_4, x_5, x_7, x_8, x_{10}\}.$$
Therefore, the granularities A2, A4, A5 can be rewritten as follows:
$U/\underline{A_2^{IO}} = \{\{x_1, x_2, x_4\}, \{x_3, x_5\}, \{x_6, x_9\}, \{x_{10}\}\}$,
$U/\underline{A_4^{IO}} = \{\{x_1, x_2, x_3, x_5\}, \{x_4\}, \{x_6\}, \{x_9, x_{10}\}\}$,
$U/\underline{A_5^{IO}} = \{\{x_1, x_3, x_4, x_6\}, \{x_2\}, \{x_5\}, \{x_9, x_{10}\}\}$;
$U/\overline{A_2^{IO}} = \{\{x_1, x_2, x_4\}, \{x_3, x_5, x_7\}, \{x_6, x_8, x_9\}, \{x_{10}\}\}$,
$U/\overline{A_4^{IO}} = \{\{x_1, x_2, x_3, x_5\}, \{x_4\}, \{x_6, x_7, x_8\}, \{x_9, x_{10}\}\}$,
$U/\overline{A_5^{IO}} = \{\{x_1, x_3, x_4, x_6\}, \{x_2, x_7\}, \{x_5, x_8\}, \{x_9, x_{10}\}\}$.
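These rewritten granularities follow mechanically from the three-way decision step: each original partition is restricted to the retained objects (for the lower approximations above, the NEG objects $x_7$ and $x_8$ are dropped from every class, and classes that become empty are discarded). A small sketch of this restriction, with a hypothetical helper name:

```python
# Restrict a partition to the retained universe: drop removed objects from
# every equivalence class and discard classes that become empty.

def restrict_partition(partition, removed):
    restricted = [[x for x in cls if x not in removed] for cls in partition]
    return [cls for cls in restricted if cls]

# Example: U/A2 with the NEG objects {x7, x8} removed (lower case of IOMRIFS).
U_A2 = [["x1", "x2", "x4"], ["x3", "x5", "x7"], ["x6", "x8", "x9"], ["x10"]]
print(restrict_partition(U_A2, {"x7", "x8"}))
# -> [['x1', 'x2', 'x4'], ['x3', 'x5'], ['x6', 'x9'], ['x10']]
```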
According to Definition 18, the results are as follows:
$CSF_{\underline{A_2^{IO}}}(E) = 0.0454$, $CSF_{\underline{A_4^{IO}}}(E) = 0.0567$, $CSF_{\underline{A_5^{IO}}}(E) = 0.0294$;
$CSF_{\overline{A_2^{IO}}}(E) = 0.3058$, $CSF_{\overline{A_4^{IO}}}(E) = 0.2227$, $CSF_{\overline{A_5^{IO}}}(E) = 0.2813$.
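Definition 18 is not restated in this section, but its form can be read off the worked values; for instance, in (4) below a single retained class yields $CSF = 0.67 - 0.23 = 0.44$ and $CAF = 0.67 + 0.23 = 0.90$, which is consistent with the comprehensive score function being the class-averaged intuitionistic score $\mu - \nu$, mirroring the accuracy form $\mu + \nu$ used in (1):

$$CSF_{A_i}(E) = \frac{1}{s}\sum_{j=1}^{n}\frac{\mu_{A_i}(x_j)-\nu_{A_i}(x_j)}{\left|[x_j]_{A_i}\right|}.$$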
In summary, the comprehensive score function of granularity A2 under the upper approximation is the highest among A2, A4, and A5 in IOMRIFS, so we choose site 2 as the result of granularity selection.
(4) Optimal site selection based on IIMRIFS
In the same way as (1), we can obtain the values of the evaluation functions $\mu_{\underline{II}}(x_j)$, $(1-\pi_{\underline{II}}(x_j))\alpha$, $(1-\pi_{\underline{II}}(x_j))\beta$, $\mu_{\overline{II}}(x_j)$, $(1-\pi_{\overline{II}}(x_j))\alpha$, and $(1-\pi_{\overline{II}}(x_j))\beta$ of IIMRIFS, as shown in Table 7.
Applying the three-way decisions of Section 5.4, the decision results for the lower and upper approximations of IIMRIFS are as follows:
$POS(\underline{X^{II}}) = \emptyset$,
$NEG(\underline{X^{II}}) = \{x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8, x_9\}$,
$BND(\underline{X^{II}}) = \{x_{10}\}$;
$POS(\overline{X^{II}}) = \{x_1, x_3, x_4, x_6, x_7, x_8, x_9, x_{10}\}$,
$NEG(\overline{X^{II}}) = \emptyset$,
$BND(\overline{X^{II}}) = \{x_2, x_5\}$.
Therefore, the granularity structures of A2, A4, A5 can be rewritten as follows:
$U/\underline{A_2^{II}} = U/\underline{A_4^{II}} = U/\underline{A_5^{II}} = \{\{x_{10}\}\}$;
$U/\overline{A_2^{II}} = \{\{x_1, x_2, x_4\}, \{x_3, x_5, x_7\}, \{x_6, x_8, x_9\}, \{x_{10}\}\}$,
$U/\overline{A_4^{II}} = \{\{x_1, x_2, x_3, x_5\}, \{x_4\}, \{x_6, x_7, x_8\}, \{x_9, x_{10}\}\}$,
$U/\overline{A_5^{II}} = \{\{x_1, x_3, x_4, x_6\}, \{x_2, x_7\}, \{x_5, x_8\}, \{x_9, x_{10}\}\}$.
According to Definition 18, the results are as follows:
$CSF_{\underline{A_2^{II}}}(E) = CSF_{\underline{A_4^{II}}}(E) = CSF_{\underline{A_5^{II}}}(E) = 0.44$;
$CAF_{\underline{A_2^{II}}}(E) = CAF_{\underline{A_4^{II}}}(E) = CAF_{\underline{A_5^{II}}}(E) = 0.9$;
$CSF_{\overline{A_2^{II}}}(E) = 0.7067$, $CSF_{\overline{A_4^{II}}}(E) = 0.7675$, $CSF_{\overline{A_5^{II}}}(E) = 0.69$;
$CAF_{\overline{A_2^{II}}}(E) = 0.9067$, $CAF_{\overline{A_4^{II}}}(E) = 0.9275$, $CAF_{\overline{A_5^{II}}}(E) = 0.91$.
In IIMRIFS, the comprehensive score and comprehensive accuracy of granularity A4 are higher than those of A2 and A5, so site 4 is chosen as the evaluation site.
Considering (3) and (4) synthetically, the granularity selection results of IOMRIFS and IIMRIFS are inconsistent, so we further compute the comprehensive accuracies of IOMRIFS:
$CAF_{\underline{A_2^{IO}}}(E) = 0.7896$, $CAF_{\underline{A_4^{IO}}}(E) = 0.8125$, $CAF_{\underline{A_5^{IO}}}(E) = 0.7544$;
$CAF_{\overline{A_2^{IO}}}(E) = 0.8725$, $CAF_{\overline{A_4^{IO}}}(E) = 0.886$, $CAF_{\overline{A_5^{IO}}}(E) = 0.8588$.
From the above calculation results, the comprehensive score and comprehensive accuracy of granularity A4 are higher than those of A2 and A5 in the pessimistic multi-granulation cases when the requirements are stricter. Therefore, site 4 is eventually chosen as the optimal evaluation site.
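The overall selection logic used throughout (1)–(4) can be summarized as: rank the candidate granularities by comprehensive score, and fall back on the comprehensive accuracy when scores tie or the lower- and upper-approximation results disagree. A minimal sketch of this tie-breaking step (helper name ours), using the IIMRIFS upper-approximation values:

```python
# Sketch of the final granularity-selection step: the highest comprehensive
# score wins; comprehensive accuracy breaks ties among the top scorers.

def select_granularity(csf, caf):
    best_score = max(csf.values())
    tied = [g for g, v in csf.items() if v == best_score]
    return tied[0] if len(tied) == 1 else max(tied, key=caf.get)

csf_upper = {"A2": 0.7067, "A4": 0.7675, "A5": 0.69}   # IIMRIFS upper CSF
caf_upper = {"A2": 0.9067, "A4": 0.9275, "A5": 0.91}   # IIMRIFS upper CAF

print(select_granularity(csf_upper, caf_upper))  # A4 -> site 4
```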

7. Conclusions

In this paper, we propose two new methods for calculating granularity importance degree among multiple granularities, and further develop a granularity reduction algorithm based on them. Subsequently, we design four novel MRIFS models built on the reduction sets under optimistic and pessimistic multi-granulation rough sets, i.e., OOMRIFS, OIMRIFS, IOMRIFS, and IIMRIFS, and demonstrate their relevant properties. In addition, to address the issue of internal redundant objects in reduction sets, four three-way decisions models with the novel MRIFS are constructed. Finally, we design the comprehensive score function and the comprehensive accuracy function for optimal granularity selection, and the validity of the proposed models is verified by algorithms and examples. The work of this paper expands the application scope of MRIFS and three-way decisions theory, which can be applied to issues such as spam e-mail filtering, risk decision-making, and investment decisions. A question worth considering is how to extend the methods of this paper to the big data environment. Moreover, how to combine fuzzy methods based on triangular or trapezoidal fuzzy numbers with the methods proposed here is also an open problem. These issues will be investigated in our future work.

Author Contributions

Z.-A.X. and D.-J.H. initiated the research and wrote the paper, M.-J.L. participated in some of the research work, and M.Z. supervised the research work and provided helpful suggestions.

Funding

This research received no external funding.

Acknowledgments

This work is supported by the National Natural Science Foundation of China under Grant Nos. 61772176 and 61402153, the Scientific and Technological Project of Henan Province of China under Grant Nos. 182102210078 and 182102210362, the Plan for Scientific Innovation of Henan Province of China under Grant No. 18410051003, and the Key Scientific and Technological Project of Xinxiang City of China under Grant No. CXGG17002.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Pawlak, Z. Rough sets. Int. J. Comput. Inf. Sci. 1982, 11, 341–356.
2. Pawlak, Z.; Skowron, A. Rough sets: Some extensions. Inf. Sci. 2007, 177, 28–40.
3. Yao, Y.Y. Probabilistic rough set approximations. Int. J. Approx. Reason. 2008, 49, 255–271.
4. Slezak, D.; Ziarko, W. The investigation of the Bayesian rough set model. Int. J. Approx. Reason. 2005, 40, 81–91.
5. Ziarko, W. Variable precision rough set model. J. Comput. Syst. Sci. 1993, 46, 39–59.
6. Zhu, W. Relationship among basic concepts in covering-based rough sets. Inf. Sci. 2009, 179, 2478–2486.
7. Ju, H.R.; Li, H.X.; Yang, X.B.; Zhou, X.Z. Cost-sensitive rough set: A multi-granulation approach. Knowl.-Based Syst. 2017, 123, 137–153.
8. Qian, Y.H.; Liang, J.Y.; Dang, C.Y. Incomplete multi-granulation rough set. IEEE Trans. Syst. Man Cybern. A 2010, 40, 420–431.
9. Qian, Y.H.; Liang, J.Y.; Yao, Y.Y.; Dang, C.Y. MGRS: A multi-granulation rough set. Inf. Sci. 2010, 180, 949–970.
10. Yang, X.B.; Qi, Y.S.; Song, X.N.; Yang, J.Y. Test cost sensitive multigranulation rough set: Model and minimal cost selection. Inf. Sci. 2013, 250, 184–199.
11. Zhang, W.X.; Mi, J.S.; Wu, W.Z. Knowledge reductions in inconsistent information systems. Chin. J. Comput. 2003, 26, 12–18. (In Chinese)
12. Sang, Y.L.; Qian, Y.H. Granular structure reduction approach to multigranulation decision-theoretic rough sets. Comput. Sci. 2017, 44, 199–205. (In Chinese)
13. Jing, Y.G.; Li, T.R.; Fujita, H.; Yu, Z.; Wang, B. An incremental attribute reduction approach based on knowledge granularity with a multi-granulation view. Inf. Sci. 2017, 411, 23–38.
14. Feng, T.; Fan, H.T.; Mi, J.S. Uncertainty and reduction of variable precision multigranulation fuzzy rough sets based on three-way decisions. Int. J. Approx. Reason. 2017, 85, 36–58.
15. Tan, A.H.; Wu, W.Z.; Tao, Y.Z. On the belief structures and reductions of multigranulation spaces with decisions. Int. J. Approx. Reason. 2017, 88, 39–52.
16. Kang, Y.; Wu, S.X.; Li, Y.W.; Liu, J.H.; Chen, B.H. A variable precision grey-based multi-granulation rough set model and attribute reduction. Knowl.-Based Syst. 2018, 148, 131–145.
17. Xu, W.H.; Li, W.T.; Zhang, X.T. Generalized multigranulation rough sets and optimal granularity selection. Granul. Comput. 2017, 2, 271–288.
18. Atanassov, K.T. More on intuitionistic fuzzy sets. Fuzzy Sets Syst. 1989, 33, 37–45.
19. Atanassov, K.T. Intuitionistic fuzzy sets. Fuzzy Sets Syst. 1986, 20, 87–96.
20. Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353.
21. Zhang, X.H. Fuzzy anti-grouped filters and fuzzy normal filters in pseudo-BCI algebras. J. Intell. Fuzzy Syst. 2017, 33, 1767–1774.
22. Huang, B.; Guo, C.X.; Zhang, Y.L.; Li, H.X.; Zhou, X.Z. Intuitionistic fuzzy multi-granulation rough sets. Inf. Sci. 2014, 277, 299–320.
23. Liu, C.H.; Pedrycz, W. Covering-based multi-granulation fuzzy rough sets. J. Intell. Fuzzy Syst. 2016, 30, 303–318.
24. Xue, Z.A.; Wang, N.; Si, X.M.; Zhu, T.L. Research on multi-granularity rough intuitionistic fuzzy cut sets. J. Henan Normal Univ. (Nat. Sci. Ed.) 2016, 44, 131–139. (In Chinese)
25. Xue, Z.A.; Lv, M.J.; Han, D.J.; Xin, X.W. Multi-granulation graded rough intuitionistic fuzzy sets models based on dominance relation. Symmetry 2018, 10, 446.
26. Wang, J.Q.; Zhang, X.H. Two types of intuitionistic fuzzy covering rough sets and an application to multiple criteria group decision making. Symmetry 2018, 10, 462.
27. Boran, F.E.; Akay, D. A biparametric similarity measure on intuitionistic fuzzy sets with applications to pattern recognition. Inf. Sci. 2014, 255, 45–57.
28. Intarapaiboon, P. A hierarchy-based similarity measure for intuitionistic fuzzy sets. Soft Comput. 2016, 20, 1–11.
29. Ngan, R.T.; Le, H.S.; Cuong, B.C.; Mumtaz, A. H-max distance measure of intuitionistic fuzzy sets in decision making. Appl. Soft Comput. 2018, 69, 393–425.
30. Yao, Y.Y. The superiority of three-way decisions in probabilistic rough set models. Inf. Sci. 2011, 181, 1080–1096.
31. Yao, Y.Y. Three-way decisions with probabilistic rough sets. Inf. Sci. 2010, 180, 341–353.
32. Zhai, J.H.; Zhang, Y.; Zhu, H.Y. Three-way decisions model based on tolerance rough fuzzy set. Int. J. Mach. Learn. Cybern. 2016, 8, 1–9.
33. Zhai, J.H.; Zhang, S.F. Three-way decisions model based on rough fuzzy set. J. Intell. Fuzzy Syst. 2018, 34, 2051–2059.
34. Hao, C.; Li, J.H.; Fan, M.; Liu, W.Q.; Tsang, E.C.C. Optimal scale selection in dynamic multi-scale decision tables based on sequential three-way decisions. Inf. Sci. 2017, 415, 213–232.
35. Luo, C.; Li, T.R.; Huang, Y.Y.; Fujita, H. Updating three-way decisions in incomplete multi-scale information systems. Inf. Sci. 2018.
36. Zhang, X.H.; Bo, C.X.; Smarandache, F.; Dai, J.H. New inclusion relation of neutrosophic sets with applications and related lattice structure. Int. J. Mach. Learn. Cybern. 2018, 9, 1753–1763.
37. Zhang, Q.H.; Xie, Q.; Wang, G.Y. A novel three-way decision model with decision-theoretic rough sets using utility theory. Knowl.-Based Syst. 2018.
38. Yang, X.P.; Tan, A.H. Three-way decisions based on intuitionistic fuzzy sets. In Proceedings of the International Joint Conference on Rough Sets, Olsztyn, Poland, 3–7 July 2017.
39. Liu, J.B.; Zhou, X.Z.; Huang, B.; Li, H.X. A three-way decision model based on intuitionistic fuzzy decision systems. In Proceedings of the International Joint Conference on Rough Sets, Olsztyn, Poland, 3–7 July 2017.
40. Liang, D.C.; Liu, D. Deriving three-way decisions from intuitionistic fuzzy decision-theoretic rough sets. Inf. Sci. 2015, 300, 28–48.
41. Liang, D.C.; Xu, Z.S.; Liu, D. Three-way decisions with intuitionistic fuzzy decision-theoretic rough sets based on point operators. Inf. Sci. 2017, 375, 183–201.
42. Sun, B.Z.; Ma, W.M.; Li, B.J.; Li, X.N. Three-way decisions approach to multiple attribute group decision making with linguistic information-based decision-theoretic rough fuzzy set. Int. J. Approx. Reason. 2018, 93, 424–442.
43. Abdel-Basset, M.; Gunasekaran, M.; Mai, M.; Chilamkurti, N. Three-way decisions based on neutrosophic sets and AHP-QFD framework for supplier selection problem. Future Gener. Comput. Syst. 2018, 89.
44. Yu, H.; Zhang, C.; Wang, G.Y. A tree-based incremental overlapping clustering method using the three-way decision theory. Knowl.-Based Syst. 2016, 91, 189–203.
45. Li, J.H.; Huang, C.C.; Qi, J.J.; Qian, Y.H.; Liu, W.Q. Three-way cognitive concept learning via multi-granulation. Inf. Sci. 2017, 378, 244–263.
46. Xue, Z.A.; Zhu, T.L.; Xue, T.Y.; Liu, J. Methodology of attribute weights acquisition based on three-way decision theory. Comput. Sci. 2015, 42, 265–268. (In Chinese)
Figure 1. The lower and upper approximations of OOMRIFS and OIMRIFS.
Figure 2. The lower and upper approximations of IOMRIFS and IIMRIFS.
Table 1. Cost matrix of decision actions.

Decision Actions | Decision Functions ($X$) | Decision Functions ($\neg X$)
$\omega_P$ | $\lambda_{PP}$ | $\lambda_{PN}$
$\omega_B$ | $\lambda_{BP}$ | $\lambda_{BN}$
$\omega_N$ | $\lambda_{NP}$ | $\lambda_{NN}$
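For context, in decision-theoretic rough set models the thresholds $\alpha$ and $\beta$ are customarily obtained from such a cost matrix by minimizing the expected decision cost; the standard closed forms (the derivation is not restated in this excerpt) are:

$$\alpha = \frac{\lambda_{PN}-\lambda_{BN}}{(\lambda_{PN}-\lambda_{BN})+(\lambda_{BP}-\lambda_{PP})}, \qquad \beta = \frac{\lambda_{BN}-\lambda_{NN}}{(\lambda_{BN}-\lambda_{NN})+(\lambda_{NP}-\lambda_{BP})}.$$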
Table 2. Internal importance degree of optimistic multi-granulation rough sets (OMRS).

 | $A_1$ | $A_2$ | $A_3$ | $A_4$ | $A_5$
$sig_{in}^{O}(A_i, A, D)$ | 0 | 0.15 | 0.05 | 0 | 0.05
$sig_{in}^{\prime,O}(A_i, A, D)$ | 0.025 | 0.375 | 0.225 | 0 | 0
Table 3. Internal importance degree of pessimistic multi-granulation rough sets (IMRS).

 | $A_1$ | $A_2$ | $A_3$ | $A_4$ | $A_5$
$sig_{in}^{I}(A_i, A, D)$ | 0 | 0.05 | 0 | 0 | 0
$sig_{in}^{\prime,I}(A_i, A, D)$ | 0 | 0.025 | 0 | 0.025 | 0.025
Table 4. The values of evaluation functions for OOMRIFS.

$x_j$ | $\mu_{\underline{OO}}(x_j)$ | $(1-\pi_{\underline{OO}}(x_j))\alpha$ | $(1-\pi_{\underline{OO}}(x_j))\beta$ | $\mu_{\overline{OO}}(x_j)$ | $(1-\pi_{\overline{OO}}(x_j))\alpha$ | $(1-\pi_{\overline{OO}}(x_j))\beta$
$x_1$ | 0.25 | 0.63 | 0.2772 | 0.51 | 0.5925 | 0.2607
$x_2$ | 0.49 | 0.6525 | 0.2871 | 0.51 | 0.5925 | 0.2607
$x_3$ | 0.49 | 0.6525 | 0.2871 | 0.54 | 0.6675 | 0.2937
$x_4$ | 0.25 | 0.63 | 0.2772 | 0.51 | 0.5925 | 0.2607
$x_5$ | 0.49 | 0.6525 | 0.2871 | 0.54 | 0.6675 | 0.2937
$x_6$ | 0.25 | 0.5325 | 0.2343 | 0.92 | 0.72 | 0.3168
$x_7$ | 0.09 | 0.7125 | 0.3135 | 0.54 | 0.6675 | 0.2937
$x_8$ | 0.15 | 0.4575 | 0.2013 | 0.15 | 0.4575 | 0.2013
$x_9$ | 0.15 | 0.4575 | 0.2013 | 0.72 | 0.63 | 0.2772
$x_{10}$ | 0.67 | 0.675 | 0.297 | 0.67 | 0.675 | 0.297
Table 5. The values of evaluation functions for OIMRIFS.

$x_j$ | $\mu_{\underline{OI}}(x_j)$ | $(1-\pi_{\underline{OI}}(x_j))\alpha$ | $(1-\pi_{\underline{OI}}(x_j))\beta$ | $\mu_{\overline{OI}}(x_j)$ | $(1-\pi_{\overline{OI}}(x_j))\alpha$ | $(1-\pi_{\overline{OI}}(x_j))\beta$
$x_1$ | 0.25 | 0.63 | 0.2772 | 0.92 | 0.72 | 0.3168
$x_2$ | 0.25 | 0.63 | 0.2772 | 0.54 | 0.615 | 0.2706
$x_3$ | 0.09 | 0.7125 | 0.3135 | 0.54 | 0.615 | 0.2706
$x_4$ | 0.25 | 0.63 | 0.2772 | 0.92 | 0.72 | 0.3168
$x_5$ | 0.09 | 0.7125 | 0.3135 | 0.54 | 0.615 | 0.2706
$x_6$ | 0.15 | 0.555 | 0.2442 | 0.92 | 0.72 | 0.3168
$x_7$ | 0.09 | 0.7125 | 0.3135 | 0.72 | 0.63 | 0.2772
$x_8$ | 0.15 | 0.4575 | 0.2013 | 0.92 | 0.72 | 0.3168
$x_9$ | 0.09 | 0.7125 | 0.3135 | 0.92 | 0.72 | 0.3168
$x_{10}$ | 0.09 | 0.7125 | 0.3135 | 0.72 | 0.63 | 0.2772
Table 6. The values of evaluation functions for IOMRIFS.

$x_j$ | $\mu_{\underline{IO}}(x_j)$ | $(1-\pi_{\underline{IO}}(x_j))\alpha$ | $(1-\pi_{\underline{IO}}(x_j))\beta$ | $\mu_{\overline{IO}}(x_j)$ | $(1-\pi_{\overline{IO}}(x_j))\alpha$ | $(1-\pi_{\overline{IO}}(x_j))\beta$
$x_1$ | 0.25 | 0.51 | 0.2244 | 0.51 | 0.5925 | 0.2607
$x_2$ | 0.25 | 0.51 | 0.2244 | 0.51 | 0.5925 | 0.2607
$x_3$ | 0.25 | 0.51 | 0.2244 | 0.54 | 0.6675 | 0.2937
$x_4$ | 0.37 | 0.72 | 0.3168 | 0.37 | 0.72 | 0.3168
$x_5$ | 0.25 | 0.51 | 0.2244 | 0.49 | 0.63 | 0.2772
$x_6$ | 0.25 | 0.5325 | 0.2343 | 0.92 | 0.72 | 0.3168
$x_7$ | 0.09 | 0.7125 | 0.3135 | 0.51 | 0.645 | 0.2838
$x_8$ | 0.15 | 0.4575 | 0.2013 | 0.49 | 0.63 | 0.2772
$x_9$ | 0.67 | 0.675 | 0.297 | 0.72 | 0.63 | 0.2772
$x_{10}$ | 0.67 | 0.675 | 0.297 | 0.67 | 0.675 | 0.297
Table 7. The values of evaluation functions for IIMRIFS.

$x_j$ | $\mu_{\underline{II}}(x_j)$ | $(1-\pi_{\underline{II}}(x_j))\alpha$ | $(1-\pi_{\underline{II}}(x_j))\beta$ | $\mu_{\overline{II}}(x_j)$ | $(1-\pi_{\overline{II}}(x_j))\alpha$ | $(1-\pi_{\overline{II}}(x_j))\beta$
$x_1$ | 0.25 | 0.63 | 0.2772 | 0.92 | 0.72 | 0.3168
$x_2$ | 0.09 | 0.7125 | 0.3135 | 0.54 | 0.615 | 0.2706
$x_3$ | 0.09 | 0.7125 | 0.3135 | 0.92 | 0.72 | 0.3168
$x_4$ | 0.25 | 0.63 | 0.2772 | 0.92 | 0.72 | 0.3168
$x_5$ | 0.09 | 0.7125 | 0.3135 | 0.54 | 0.615 | 0.2706
$x_6$ | 0.09 | 0.7125 | 0.3135 | 0.92 | 0.72 | 0.3168
$x_7$ | 0.09 | 0.7125 | 0.3135 | 0.92 | 0.72 | 0.3168
$x_8$ | 0.09 | 0.7125 | 0.3135 | 0.92 | 0.72 | 0.3168
$x_9$ | 0.15 | 0.4575 | 0.2013 | 0.92 | 0.72 | 0.3168
$x_{10}$ | 0.67 | 0.675 | 0.297 | 0.72 | 0.63 | 0.2772
