1. Introduction
In many decision-making problems, it has traditionally been supposed that all information is depicted in the form of crisp numbers. However, much of a decision maker's assessment information is imprecise or uncertain [1,2,3]. Hence, he or she cannot express his or her preferences using an exact numerical value [4,5,6,7]. In order to depict qualitative assessment information easily, Herrera and Martinez [8] gave the linguistic term sets (LTSs) for computing with words. Herrera and Martinez [9] combined linguistic and numerical information on the basis of the two-tuple fuzzy linguistic representation model. Herrera and Martinez [10] defined the linguistic two-tuples for handling multigranular hierarchical linguistic contexts. Dong and Herrera-Viedma [11] tackled the consistency-driven automatic methodology for setting interval numerical scales of LTSs in linguistic GDM with preference relations. Recently, the two-tuple linguistic processing model has been extended to interval numbers [12,13], intuitionistic fuzzy sets [14,15,16], hesitant fuzzy sets [17,18,19,20], and bipolar fuzzy sets [21,22]. Furthermore, Rodriguez, et al. [23] defined the hesitant fuzzy linguistic term sets (HFLTSs) on the basis of hesitant fuzzy sets [24] and linguistic term sets [25], which allow DMs to provide several possible linguistic values. However, in most of the current research on HFLTSs, all possible values supplied by the DMs have equal weight or importance. Obviously, this is inconsistent with reality. In both personal MADM and multiple attribute group decision making (MAGDM) problems, the DMs may offer possible linguistic terms whose values follow different probability distributions. Thus, Pang, et al. [26] proposed the probabilistic linguistic term sets (PLTSs) to overcome this defect and constructed a framework for ranking PLTSs with the score or deviation degree of each PLTS. Bai, et al. [27] gave a comparison method and proposed a more efficient way to tackle PLTSs. Kobina, et al. [28] proposed some probabilistic linguistic power operators for MAGDM based on the classical power aggregation operators [29,30,31]. Liang, et al. [32] developed the probabilistic linguistic grey relational analysis (PL-GRA) method for MAGDM based on the geometric Bonferroni mean [33,34,35,36]. Liao, et al. [37] defined a linear programming method to deal with MADM with probabilistic linguistic information. Lin, et al. [38] proposed the ELECTRE II method with PLTSs for edge computing. Liao, et al. [39] studied novel operations of PLTSs to develop the probabilistic linguistic ELECTRE III method. Feng, et al. [40] proposed the probabilistic linguistic QUALIFLEX method with possibility degree comparison. Chen, et al. [41] employed the probabilistic linguistic MULTIMOORA method for cloud-based ERP system selection.
Entropy is a very important and efficient tool for measuring uncertain information. Fuzzy entropy was first defined by Zadeh [42]. The starting point for the cross-entropy method is information theory as proposed by Shannon [43]. Kullback and Leibler [44] developed the "cross-entropy distance" measure between two probability distributions. Furtan [45] studied entropy theory in firm decision-making. Dhar, et al. [46] investigated investment decision-making with entropy reduction. Yang and Qiu [47] researched a decision-making method based on expected utility and entropy. Muley and Bajaj [48] used an entropy-based approach to solve fuzzy MADM. Xu and Hu [49] proposed entropy-based procedures for intuitionistic fuzzy MADM. Chen, et al. [50] concretely constructed an interval-valued intuitionistic fuzzy entropy. Lotfi and Fallahnejad [51] proposed the imprecise Shannon's entropy for MADM. Khaleie and Fasanghari [52] investigated an intuitionistic fuzzy MAGDM method using entropy and the association coefficient. Zhao, et al. [53] extended the VIKOR method based on cross-entropy for interval-valued intuitionistic fuzzy MAGDM. Peng, et al. [54] defined the cross-entropy for intuitionistic hesitant fuzzy sets in MADM. Tao, et al. [55] developed entropy measures for linguistic information in MAGDM. Song, et al. [56] studied MADM with dual uncertain information on the basis of grey incidence analysis and grey relative entropy optimization. Farhadinia and Xu [57] tackled hesitant fuzzy linguistic entropy and cross-entropy measures in MADM. Xue, et al. [58] proposed the Pythagorean fuzzy LINMAP method with entropy theory for railway project investment. Biswas and Sarkar [59] developed the Pythagorean fuzzy TOPSIS method for MAGDM with unknown weight information through an entropy measure. Xiao [60] gave an MADM model based on D-numbers and belief entropy. Xu and Luo [61] defined an information entropy risk measure for a large group decision-making method.
The TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) method was initially proposed by Hwang and Yoon [62] for solving MADM problems; it concentrates on choosing the alternative with the shortest distance from the positive ideal solution (PIS) and the longest distance from the negative ideal solution (NIS). In recent years, many scholars have studied MADM or MAGDM problems based on the TOPSIS method [63,64,65,66,67,68]. The goal of this paper is to extend the TOPSIS method to solve probabilistic linguistic MAGDM problems with unknown weight information on the basis of information entropy. The innovations of the paper can be summarized as follows: (1) the TOPSIS method is extended to PLTSs with unknown weight information; (2) the probabilistic linguistic TOPSIS (PL-TOPSIS) method is proposed to solve probabilistic linguistic MAGDM problems with entropy weights; (3) a case study on supplier selection for new agricultural machinery products is supplied to illustrate the developed approach; and (4) some comparative studies with the probabilistic linguistic weighted average (PLWA) operator and the PL-GRA method are provided to demonstrate the rationality of the PL-TOPSIS method.
The remainder of this paper is set out as follows. Section 2 supplies some basic concepts of PLTSs. In Section 3, the TOPSIS method is proposed for probabilistic linguistic MAGDM problems with entropy weights. In Section 4, a case study on supplier selection for new agricultural machinery products is given and some comparative analysis is conducted. The study finishes with some conclusions in Section 5.
2. Preliminaries
In this section, we review some concepts and operations related to linguistic term sets and PLTSs.
Definition 1. ([26])
Let $S = \{s_i \mid i = -\tau, \ldots, -1, 0, 1, \ldots, \tau\}$ be an LTS. The linguistic term $s_i$ can express information equivalent to the value $\beta \in [0, 1]$, which is obtained with the transformation function $g$:
$$g: [-\tau, \tau] \rightarrow [0, 1], \quad g(s_i) = \frac{i + \tau}{2\tau} = \beta.$$
Conversely, the value $\beta \in [0, 1]$ can also express information equivalent to the linguistic term $s_i$, which is obtained with the transformation function $g^{-1}$:
$$g^{-1}: [0, 1] \rightarrow [-\tau, \tau], \quad g^{-1}(\beta) = s_{(2\beta - 1)\tau} = s_i.$$
Pang, Wang and Xu [26] proposed a novel concept called probabilistic linguistic term sets (PLTSs) to depict qualitative information.
Definition 2. ([26])
Given an LTS $S$, a PLTS is defined as:
$$L(p) = \left\{ L^{(\phi)}\left(p^{(\phi)}\right) \,\middle|\, L^{(\phi)} \in S, \ p^{(\phi)} \geq 0, \ \phi = 1, 2, \ldots, \#L(p), \ \sum\nolimits_{\phi=1}^{\#L(p)} p^{(\phi)} \leq 1 \right\},$$
where $L^{(\phi)}\left(p^{(\phi)}\right)$ is the linguistic term $L^{(\phi)}$ associated with the probability value $p^{(\phi)}$, and $\#L(p)$ is the number of linguistic terms in $L(p)$. The linguistic terms in $L(p)$ are arranged in ascending order.
To ease computation, Pang, Wang and Xu [26] normalized the PLTS $L(p)$ as $\dot{L}(p) = \left\{ L^{(\phi)}\left(\dot{p}^{(\phi)}\right) \mid \phi = 1, 2, \ldots, \#L(p) \right\}$, where $\dot{p}^{(\phi)} = p^{(\phi)} \big/ \sum_{\phi=1}^{\#L(p)} p^{(\phi)}$ for all $\phi = 1, 2, \ldots, \#L(p)$.
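To make these notions concrete, the following minimal Python sketch (our illustration, not part of [26]) models a PLTS over an assumed seven-term LTS ($\tau = 3$) as a list of (subscript, probability) pairs and implements the transformation function $g$ of Definition 1 and the normalization of Definition 2; the names PLTS, TAU, g, and normalize are ours:

```python
# Illustrative sketch only; PLTS, TAU, g, and normalize are our own names.
PLTS = list[tuple[int, float]]  # (subscript of s_i, probability), ascending order

TAU = 3  # assumed seven-term LTS: s_{-3}, ..., s_0, ..., s_3

def g(i: int, tau: int = TAU) -> float:
    """Transformation function of Definition 1: maps subscript i to [0, 1]."""
    return (i + tau) / (2 * tau)

def normalize(plts: PLTS) -> PLTS:
    """Normalization of Definition 2: rescale probabilities to sum to 1."""
    total = sum(p for _, p in plts)
    return [(i, p / total) for i, p in plts]

# Example: {s_1(0.4), s_2(0.4)} has total probability 0.8 and normalizes
# to {s_1(0.5), s_2(0.5)}.
print(normalize([(1, 0.4), (2, 0.4)]))  # [(1, 0.5), (2, 0.5)]
```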
Moreover, in the process of an MAGDM issue, the numbers of linguistic terms in different PLTSs are often unequal, which complicates computation. In this case, we need to increase the number of linguistic terms in the PLTSs that have relatively few of them, so that all PLTSs have the same number of linguistic terms.
Definition 3. ([26])
Let $S$ be an LTS, and let $L_1(p)$ and $L_2(p)$ be two PLTSs, where $\#L_1(p)$ and $\#L_2(p)$ are the numbers of linguistic terms in $L_1(p)$ and $L_2(p)$, respectively. If $\#L_1(p) > \#L_2(p)$, then add $\#L_1(p) - \#L_2(p)$ linguistic terms to $L_2(p)$. Moreover, the added linguistic terms should be the smallest linguistic term in $L_2(p)$, and the probabilities of the added linguistic terms should be zero.
Having defined PLTSs and their normalization, we need a method to compare them. To this end, we first define the score (expected value) and deviation degree of PLTSs.
Definition 4. ([26])
For a PLTS $L(p)$, let $r^{(\phi)}$ be the subscript of the linguistic term $L^{(\phi)}$. The expected value $E(L(p))$ and deviation degree $\sigma(L(p))$ of $L(p)$ are defined as:
$$E(L(p)) = s_{\bar{\alpha}}, \quad \bar{\alpha} = \sum_{\phi=1}^{\#L(p)} r^{(\phi)} p^{(\phi)} \Big/ \sum_{\phi=1}^{\#L(p)} p^{(\phi)},$$
$$\sigma(L(p)) = \left( \sum_{\phi=1}^{\#L(p)} \left( p^{(\phi)} \left( r^{(\phi)} - \bar{\alpha} \right) \right)^2 \right)^{1/2} \Big/ \sum_{\phi=1}^{\#L(p)} p^{(\phi)}.$$
Using the expected value and the deviation degree, the order relation between two PLTSs $L_1(p)$ and $L_2(p)$ is defined as: (1) if $E(L_1(p)) > E(L_2(p))$, then $L_1(p) > L_2(p)$; and (2) if $E(L_1(p)) = E(L_2(p))$, then: if $\sigma(L_1(p)) > \sigma(L_2(p))$, then $L_1(p) < L_2(p)$; and if $\sigma(L_1(p)) = \sigma(L_2(p))$, then $L_1(p) = L_2(p)$.
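The expected value and deviation degree translate directly into code; the following sketch (helper names score, deviation, and greater are ours) also encodes the comparison rule:

```python
PLTS = list[tuple[int, float]]

def score(plts: PLTS) -> float:
    """The mean subscript (alpha-bar) underlying E(L(p)) in Definition 4."""
    total = sum(p for _, p in plts)
    return sum(i * p for i, p in plts) / total

def deviation(plts: PLTS) -> float:
    """Deviation degree sigma(L(p)) in Definition 4."""
    total = sum(p for _, p in plts)
    mean = score(plts)
    return sum((p * (i - mean)) ** 2 for i, p in plts) ** 0.5 / total

def greater(l1: PLTS, l2: PLTS) -> bool:
    """Order relation of Definition 4: the higher score wins; on a score
    tie, the PLTS with the smaller deviation degree is ranked higher."""
    if score(l1) != score(l2):
        return score(l1) > score(l2)
    return deviation(l1) < deviation(l2)

print(greater([(2, 1.0)], [(1, 0.5), (2, 0.5)]))  # True: score 2.0 > 1.5
```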
Definition 5. ([69])
Let $S$ be an LTS, and let $L_1(p)$ and $L_2(p)$ be two PLTSs with $\#L_1(p) = \#L_2(p)$. Then the Hamming distance between $L_1(p)$ and $L_2(p)$ is defined as follows:
$$d\left(L_1(p), L_2(p)\right) = \frac{1}{\#L_1(p)} \sum_{\phi=1}^{\#L_1(p)} \left| p_1^{(\phi)} r_1^{(\phi)} - p_2^{(\phi)} r_2^{(\phi)} \right|,$$
where $r_1^{(\phi)}$ and $r_2^{(\phi)}$ are the subscripts of the linguistic terms $L_1^{(\phi)}$ and $L_2^{(\phi)}$, respectively.
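In code, assuming two PLTSs already padded to equal length (Definition 3), the distance might be computed as follows (the helper name hamming is ours):

```python
PLTS = list[tuple[int, float]]

def hamming(l1: PLTS, l2: PLTS) -> float:
    """Hamming distance of Definition 5 between equal-length PLTSs."""
    assert len(l1) == len(l2), "pad the shorter PLTS first (Definition 3)"
    return sum(abs(r1 * p1 - r2 * p2)
               for (r1, p1), (r2, p2) in zip(l1, l2)) / len(l1)

# d({s_1(0.5), s_2(0.5)}, {s_0(0.0), s_2(1.0)}) = (|0.5 - 0| + |1 - 2|) / 2
print(hamming([(1, 0.5), (2, 0.5)], [(0, 0.0), (2, 1.0)]))  # 0.75
```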
3. TOPSIS Method for Probabilistic Linguistic MAGDM with Entropy Weight
In this section, we propose a novel probabilistic linguistic TOPSIS method for MAGDM problems with unknown weight information. The following notations are used to formulate probabilistic linguistic MAGDM problems. Let $A = \{A_1, A_2, \ldots, A_m\}$ be a discrete set of alternatives and $G = \{G_1, G_2, \ldots, G_n\}$ a set of attributes with weight vector $w = (w_1, w_2, \ldots, w_n)$, where $w_j \in [0, 1]$, $j = 1, 2, \ldots, n$, and $\sum_{j=1}^{n} w_j = 1$, and let $E = \{E_1, E_2, \ldots, E_q\}$ be a set of experts. Suppose that the $n$ qualitative attributes are evaluated by the $q$ qualified experts and that their values are given as linguistic expression information.
Then, the PL-TOPSIS method is designed to solve the MAGDM problems with entropy weights. The detailed calculation steps are given as follows:
Step 1. Convert the linguistic information into probabilistic linguistic information and construct the probabilistic linguistic decision matrix $L = \left( L_{ij}(p) \right)_{m \times n}$, $i = 1, 2, \ldots, m$, $j = 1, 2, \ldots, n$.
Step 2. Derive the normalized probabilistic linguistic decision matrix $\dot{L} = \left( \dot{L}_{ij}(p) \right)_{m \times n}$, $i = 1, 2, \ldots, m$, $j = 1, 2, \ldots, n$. Thus, the probabilistic linguistic information for the alternative $A_i$ with respect to all the attributes can be expressed as $A_i = \left( \dot{L}_{i1}(p), \dot{L}_{i2}(p), \ldots, \dot{L}_{in}(p) \right)$, $i = 1, 2, \ldots, m$.
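As a small illustration of Steps 1 and 2 (the matrix entries below are invented, and the snippet reuses the normalize and pad helpers sketched in Section 2), each cell holds a PLTS; every cell is normalized and lengths are equalized within each attribute column so that the column-wise distances of Step 5 are well defined:

```python
# Toy 2 x 2 probabilistic linguistic decision matrix (invented values);
# reuses the normalize and pad helpers from the Section 2 sketches.
matrix = [
    [[(1, 0.4), (2, 0.4)], [(0, 1.0)]],             # alternative A1
    [[(2, 1.0)],           [(-1, 0.5), (1, 0.5)]],  # alternative A2
]
matrix = [[normalize(cell) for cell in row] for row in matrix]  # Step 2
for j in range(len(matrix[0])):  # equalize term counts per attribute column
    longest = max(len(row[j]) for row in matrix)
    for row in matrix:
        row[j] = pad(row[j], longest)
```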
Step 3. Compute the weight values with entropy.
The weight of the attributes is very important in decision-making problems. Many scholars have focused on decision-making problems with incomplete or unknown attribute weight information in different fuzzy environments [70,71,72,73,74,75]. Entropy [43] is a conventional concept from information theory which is also used to determine attribute weights. The larger the entropy value for a given attribute, the smaller the differences in the ratings of the alternatives with respect to this attribute; in turn, this means that such an attribute supplies less information and should be assigned a smaller weight. Firstly, the normalized decision matrix $NDM = \left( NDM_{ij} \right)_{m \times n}$ is derived as follows:
$$NDM_{ij} = \frac{g\left( E\left( \dot{L}_{ij}(p) \right) \right)}{\sum_{i=1}^{m} g\left( E\left( \dot{L}_{ij}(p) \right) \right)}, \quad i = 1, 2, \ldots, m, \ j = 1, 2, \ldots, n,$$
where $E\left( \dot{L}_{ij}(p) \right)$ is the expected value of $\dot{L}_{ij}(p)$ (Definition 4) and $g$ is the transformation function of Definition 1. Then, the Shannon entropy $SE_j$ is calculated as follows:
$$SE_j = -\frac{1}{\ln m} \sum_{i=1}^{m} NDM_{ij} \ln NDM_{ij}, \quad j = 1, 2, \ldots, n,$$
and $NDM_{ij} \ln NDM_{ij}$ is defined as 0 if $NDM_{ij} = 0$. Finally, the vector of attribute weights $w = \left( w_1, w_2, \ldots, w_n \right)$ is computed:
$$w_j = \frac{1 - SE_j}{\sum_{j=1}^{n} \left( 1 - SE_j \right)}, \quad j = 1, 2, \ldots, n.$$
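A small numeric sketch of this entropy weighting (the score matrix below is invented; in the method it would hold the transformed expected values $g(E(\dot{L}_{ij}(p)))$):

```python
import math

# Invented scores in [0, 1]; one row per alternative, one column per attribute.
scores = [
    [0.75, 0.50, 0.60],
    [0.58, 0.50, 0.90],
    [0.42, 0.83, 0.30],
]
m, n = len(scores), len(scores[0])

weights = []
for j in range(n):
    col = [scores[i][j] for i in range(m)]
    ndm = [v / sum(col) for v in col]  # normalized decision matrix column
    # Shannon entropy; the v = 0 terms are defined as 0 and skipped.
    se = -sum(v * math.log(v) for v in ndm if v > 0) / math.log(m)
    weights.append(1 - se)
total = sum(weights)
weights = [d / total for d in weights]  # entropy weights, sum to 1
print([round(w, 3) for w in weights])
```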
Step 4. Define the probabilistic linguistic positive ideal solution (PLPIS) $A^+$ and the probabilistic linguistic negative ideal solution (PLNIS) $A^-$:
$$A^+ = \left( L_1^+(p), L_2^+(p), \ldots, L_n^+(p) \right), \quad A^- = \left( L_1^-(p), L_2^-(p), \ldots, L_n^-(p) \right),$$
where
$$L_j^+(p) = \max_{i} \dot{L}_{ij}(p), \quad L_j^-(p) = \min_{i} \dot{L}_{ij}(p), \quad j = 1, 2, \ldots, n,$$
and the maximum and minimum are taken with respect to the order relation of Definition 4.
Step 5. Calculate the distances of each alternative from the PLPIS and the PLNIS, respectively:
$$d\left(A_i, A^+\right) = \sum_{j=1}^{n} w_j \, d\left(\dot{L}_{ij}(p), L_j^+(p)\right), \quad d\left(A_i, A^-\right) = \sum_{j=1}^{n} w_j \, d\left(\dot{L}_{ij}(p), L_j^-(p)\right), \quad i = 1, 2, \ldots, m.$$
Step 6. Calculate the probabilistic linguistic relative closeness degree (PLRCD) of each alternative from the PLPIS:
$$PLRCD\left(A_i\right) = \frac{d\left(A_i, A^+\right)}{d\left(A_i, A^+\right) + d\left(A_i, A^-\right)}, \quad i = 1, 2, \ldots, m.$$
Step 7. According to the $PLRCD\left(A_i\right)$ values, the ranking order of all the alternatives can be determined. The best alternative is the one closest to the PLPIS and farthest from the PLNIS. Thus, the alternative with the smallest $PLRCD\left(A_i\right)$ value is the most desirable one.
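Putting Steps 4–7 together, here is an end-to-end sketch on an invented 3-alternative, 2-attribute normalized matrix, reusing the score and hamming helpers from the Section 2 sketches; the PLRCD computation follows the smaller-is-better convention stated in Step 7:

```python
# End-to-end sketch of Steps 4-7 on invented data; reuses the score and
# hamming helpers sketched in Section 2.
matrix = [
    [[(1, 0.5), (2, 0.5)], [(0, 0.0), (0, 1.0)]],   # alternative A1
    [[(2, 0.0), (2, 1.0)], [(-1, 0.5), (1, 0.5)]],  # alternative A2
    [[(0, 0.5), (1, 0.5)], [(1, 0.0), (2, 1.0)]],   # alternative A3
]
weights = [0.6, 0.4]  # e.g., obtained from the entropy procedure in Step 3
m, n = len(matrix), len(matrix[0])

# Step 4: PLPIS and PLNIS, column-wise via the score-based order relation.
plpis = [max((row[j] for row in matrix), key=score) for j in range(n)]
plnis = [min((row[j] for row in matrix), key=score) for j in range(n)]

# Step 5: weighted Hamming distances from the ideal solutions.
d_pos = [sum(w * hamming(row[j], plpis[j]) for j, w in enumerate(weights))
         for row in matrix]
d_neg = [sum(w * hamming(row[j], plnis[j]) for j, w in enumerate(weights))
         for row in matrix]

# Steps 6-7: relative closeness to the PLPIS; the smallest value ranks first.
plrcd = [dp / (dp + dn) for dp, dn in zip(d_pos, d_neg)]
ranking = sorted(range(m), key=lambda i: plrcd[i])
print(ranking)  # indices of the alternatives, best first
```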