Neurocomputing 70 (2007) 633–640
www.elsevier.com/locate/neucom
A novel quantum swarm evolutionary algorithm and its applications
Yan Wang a,*, Xiao-Yue Feng a, Yan-Xin Huang a, Dong-Bing Pu b, Wen-Gang Zhou a, Yan-Chun Liang a, Chun-Guang Zhou a

a College of Computer Science and Technology, Jilin University, Key Laboratory for Symbol Computation and Knowledge Engineering of the National Education Ministry, Changchun 130021, China
b Department of Computer Science, Northeast Normal University, Changchun 130024, China

* Corresponding author. E-mail addresses: [email protected] (Y. Wang), [email protected] (C.-G. Zhou).
Available online 13 October 2006
Abstract
In this paper, a novel quantum swarm evolutionary algorithm (QSE) is presented based on the quantum-inspired evolutionary algorithm (QEA). A new Q-bit expression called the quantum angle is proposed, and an improved particle swarm optimization (PSO) is employed to update the quantum angles automatically. The simulation results on the 0–1 knapsack problem show that QSE is superior to the traditional QEA. In addition, comparison experiments show that QSE outperforms many traditional heuristic algorithms, such as the hill climbing algorithm, the simulated annealing algorithm and the tabu search algorithm. Meanwhile, the experimental results on a 14-city traveling salesman problem (TSP) show that the approach is feasible and effective for small-scale TSPs, which indicates a promising novel approach for solving TSPs.
© 2006 Elsevier B.V. All rights reserved.
Keywords: Quantum swarm evolutionary algorithm; Quantum-inspired evolutionary algorithm; Particle swarm optimization; Knapsack problem;
Traveling salesman problem
1. Introduction
Quantum computing was proposed by Benioff and Feynman [2,4] in the early 1980s. Based on concepts and principles of quantum theory such as the superposition of quantum states, entanglement and interference, quantum computing was claimed to be able to solve many problems that are difficult for classical computation. Because of its unique computational performance, there has been great interest in applications of quantum computing [5,14]. Han [6] proposed the quantum-inspired evolutionary algorithm (QEA), which was inspired by the concept of quantum computing. In QEA, the smallest unit of information is called a Q-bit, which is defined as $\begin{bmatrix}\alpha \\ \beta\end{bmatrix}$. Besides, a Q-gate was introduced as a variation operator to promote the optimization of the individuals' Q-bits. Han and Yang (2003, 2004) [7,8,11,21]
have applied QEA to a number of optimization problems and applications, such as function optimization, face verification and blind source separation. The reported performance of QEA shows that it is better than traditional evolutionary algorithms, such as the conventional genetic algorithm (GA) [13], in many fields. However, the step size Delta involved in QEA is a constant that must be designed for each application problem, and no theoretical basis for choosing it has been established so far.
Meanwhile, particle swarm optimization (PSO), a population-based optimization strategy introduced by Kennedy and Eberhart [12], has demonstrated good performance in many function optimization and parameter optimization problems in recent years. It is initialized with a group of random particles whose velocities and positions are then updated with the following formulae:

$$v(t+1) = v(t) + c_1 \cdot \mathrm{rand}() \cdot (pbest(t) - present(t)) + c_2 \cdot \mathrm{rand}() \cdot (gbest(t) - present(t)),$$
$$present(t+1) = present(t) + v(t+1).$$
Wang and Pang [18] have applied PSO to solve the traveling salesman problem (TSP). In addition, Feng and Wang [3] proposed a hybrid algorithm, AFTER–PSO, for combining forecasting. Wang and Huang (2004, 2005) also applied the same strategy to establish two neural network systems: a fuzzy neural network system based on generalized class cover and a minimal uncertainty neural network based on the Bayesian theorem. The new systems have been successfully applied to identify the taste signals of tea [10,19,20].
In this article, a novel quantum swarm evolutionary algorithm (QSE) based on QEA is proposed. The proposed algorithm employs a novel Q-bit expression mechanism called the quantum angle and adopts an improved PSO to update the Q-bits automatically. The simulation results show that QSE is superior to QEA and to many traditional heuristic algorithms, such as the hill climbing algorithm, the simulated annealing algorithm and the tabu search algorithm, in solving the 0–1 knapsack problem. In addition, the test result on a 14-city TSP shows the feasibility of applying QSE to the TSP.
The original QEA and PSO are introduced in Section 2, and some drawbacks of these methods are discussed. The definition of the quantum angle and the procedure of the quantum swarm evolutionary algorithm are illustrated in Section 3. Experimental results on the 0–1 knapsack problem and the TSP are shown and discussed in Section 4. Conclusions are given in Section 5.
2. Original QEA and PSO
2.1. Quantum-inspired evolutionary algorithm
QEA was proposed by Han [6] and was inspired by the concept of quantum computing. In QEA, the smallest unit of information is called a Q-bit, which is defined as $\begin{bmatrix}\alpha \\ \beta\end{bmatrix}$, where $\alpha$ and $\beta$ are complex numbers that specify the probability amplitudes of the corresponding states. The moduli $|\alpha|^2$ and $|\beta|^2$ are the probabilities that the Q-bit exists in state "0" and state "1", respectively, which satisfy $|\alpha|^2 + |\beta|^2 = 1$. An m-Q-bit individual is defined as

$$\begin{bmatrix}\alpha_1 & \alpha_2 & \cdots & \alpha_m \\ \beta_1 & \beta_2 & \cdots & \beta_m\end{bmatrix},$$

where $|\alpha_i|^2 + |\beta_i|^2 = 1$ ($i = 1, 2, \ldots, m$) and $m$ is the number of Q-bits [6].
The procedure of QEA is described as follows:

Procedure of QEA
Begin
  Initialize Q(0) at t = 0
  Make P(0) by observing the state of Q(0)
  Repair P(0)
  Evaluate f(X_j^0)
  Store the best solutions among P(0) into B_0 and f(B_0)
  While (not termination condition) do
  Begin
    t = t + 1
    Make P(t) by observing the state of Q(t)
    Repair P(t)
    Evaluate f(X_j^t)
    Update Q(t) using Q-gate U(t)
    Store the best solutions among P(t) into B_t and f(B_t)
  End
End
where $Q(t) = \{q_1^t, q_2^t, \ldots, q_n^t\}$, $P(t) = \{X_1^t, X_2^t, \ldots, X_n^t\}$,

$$q_j^t = \begin{bmatrix}\alpha_{j1}^t & \alpha_{j2}^t & \cdots & \alpha_{jm}^t \\ \beta_{j1}^t & \beta_{j2}^t & \cdots & \beta_{jm}^t\end{bmatrix},$$

$X_j^t = \{x_{j1}^t, x_{j2}^t, \ldots, x_{jm}^t\}$, $B \in X_j$, $i = 1, 2, \ldots, m$, $j = 1, 2, \ldots, n$, and $n$ is the size of the population.
In the step "initialize Q(0) at t = 0", all $\alpha_{ji}^0$ and $\beta_{ji}^0$ of every $q_j^0$ in Q(0) are initialized to $1/\sqrt{2}$. This means that each m-Q-bit individual $q_j^0$ represents the linear superposition of all possible states with the same probability [6].
To obtain a binary string, the step "make P(t) by observing the state of Q(t)" is implemented for each Q-bit individual as follows. When the state of Q(t) is observed, the value $x_{ji}^t = 0$ or $1$ of P(t) is determined by the probability $|\alpha_{ji}^t|^2$ or $|\beta_{ji}^t|^2$ [8].
Procedure make P(t)
Begin
  j = 0
  While (j < n) do
    j = j + 1
    i = 0
    While (i < m) do
      i = i + 1
      If random[0, 1] > |α_{ji}^t|^2
        Then x_{ji}^t = 1
        Else x_{ji}^t = 0
    End
  End
End
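To make the observation step concrete, a minimal Python sketch is given below; the array layout and the helper names initialize_Q and make_P are illustrative choices of ours, not part of the original QEA description.

import numpy as np

def initialize_Q(n, m):
    # Every amplitude starts at 1/sqrt(2): an equal superposition of "0" and "1".
    alpha = np.full((n, m), 1.0 / np.sqrt(2.0))
    beta = np.full((n, m), 1.0 / np.sqrt(2.0))
    return alpha, beta

def make_P(alpha):
    # Observe each Q-bit: x = 1 when a uniform random number exceeds |alpha|^2,
    # i.e. with probability |beta|^2.
    r = np.random.rand(*alpha.shape)
    return (r > np.abs(alpha) ** 2).astype(int)

alpha, beta = initialize_Q(n=20, m=100)
P = make_P(alpha)   # binary matrix of shape (20, 100), one row per individual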
The steps "repair P(t)" and "evaluate f(X_j^t)" depend on the problem at hand, where f(X) is the fitness function.

The update procedure of the Q-bits is introduced as follows.
Procedure update Q(t)
Begin
  j = 0
  While (j < n) do
    j = j + 1
    i = 0
    While (i < m) do
      i = i + 1
      Determine Δθ_ji with the lookup table
      Obtain [α_{ji}^t, β_{ji}^t]^T as
      $$\begin{bmatrix}\alpha_{ji}^t \\ \beta_{ji}^t\end{bmatrix} = U(t)\begin{bmatrix}\alpha_{ji}^{t-1} \\ \beta_{ji}^{t-1}\end{bmatrix}$$
End
End
End
The quantum gate (Q-gate) U(t) is a variation operator of QEA. It can be chosen according to the problem. A modified rotation gate used in QEA is as follows [21]:

$$\begin{bmatrix}\alpha_i' \\ \beta_i'\end{bmatrix} = \begin{bmatrix}\cos(\xi(\Delta\theta_i)) & -\sin(\xi(\Delta\theta_i)) \\ \sin(\xi(\Delta\theta_i)) & \cos(\xi(\Delta\theta_i))\end{bmatrix}\begin{bmatrix}\alpha_i \\ \beta_i\end{bmatrix},$$

where $\xi(\Delta\theta_i) = s(\alpha_i, \beta_i)\cdot\Delta\theta_i$; $s(\alpha_i, \beta_i)$ and $\Delta\theta_i$ represent the rotation direction and angle, respectively. The lookup table is presented in Table 1, where Delta is the step size that must be designed for each application problem. However, no theoretical basis for choosing it has been established so far, although it is usually set to a small value. In the comparison experiments we set Delta = 0.01π. Our proposed algorithm, QSE, is based on another improved quantum rotation gate strategy.
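As an illustration of this Q-gate, the Python sketch below rotates a single Q-bit using Delta and the sign pattern of Table 1; the function name and the omission of the boundary columns α_i = 0 and β_i = 0 are our own simplifications, not the authors' implementation.

import numpy as np

DELTA = 0.01 * np.pi   # step size Delta used in the comparison experiments

def rotate_qbit(alpha, beta, x, b, f_x_better):
    # Delta-theta is non-zero only when the observed bit x disagrees with the
    # corresponding bit b of the stored best solution B (rows 3-6 of Table 1).
    if (x, b) not in ((0, 1), (1, 0)):
        return alpha, beta
    if alpha * beta == 0:
        return alpha, beta   # boundary columns of Table 1 are not handled in this sketch
    # Rotation direction s(alpha, beta): the alpha*beta > 0 column of Table 1;
    # the alpha*beta < 0 column simply flips the sign.
    base = 1 if (x, f_x_better) in ((0, False), (1, True)) else -1
    s = base if alpha * beta > 0 else -base
    xi = s * DELTA
    c, sn = np.cos(xi), np.sin(xi)
    return c * alpha - sn * beta, sn * alpha + c * beta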
Table 1
A modified rotation gate lookup table

x_i   b_i   f(X) > f(B)   Δθ_i     s(α_i, β_i)
                                   α_i·β_i > 0    α_i·β_i < 0    α_i = 0    β_i = 0
0     0     False         0         0              0              0          0
0     0     True          0         0              0              0          0
0     1     False         Delta    +1             −1              0         ±1
0     1     True          Delta    −1             +1             ±1          0
1     0     False         Delta    −1             +1             ±1          0
1     0     True          Delta    +1             −1              0         ±1
1     1     False         0         0              0              0          0
1     1     True          0         0              0              0          0

2.2. Particle swarm optimization

PSO is a population-based optimization strategy introduced by Kennedy and Eberhart [12]. It has demonstrated good performance in many function optimization and parameter optimization problems in recent years, such as solving the TSP [18], combining forecasting [3] and optimizing neural network systems [10,19,20]. It is initialized with a group of random particles whose velocities and positions are then updated with the following formulae:

$$v(t+1) = v(t) + c_1 \cdot \mathrm{rand}() \cdot (pbest(t) - present(t)) + c_2 \cdot \mathrm{rand}() \cdot (gbest(t) - present(t)),$$
$$present(t+1) = present(t) + v(t+1),$$

where v(t) is the particle velocity and present(t) is the current particle position. pbest(t) and gbest(t) are the individual best and the global best, respectively. rand() is a random number in [0, 1]. c_1 and c_2 are learning factors, usually set to c_1 = c_2 = 2. To accelerate the search and to avoid oscillation, an improved velocity update that satisfies the convergence condition of the particles is adopted in Section 3.2 and in the following experiments.
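For reference, a minimal Python sketch of this standard PSO update follows; the array shapes and the sphere objective used to drive the loop are our own illustrative choices.

import numpy as np

def pso_step(pos, vel, pbest, gbest, c1=2.0, c2=2.0):
    # One velocity/position update of standard PSO.
    r1 = np.random.rand(*pos.shape)
    r2 = np.random.rand(*pos.shape)
    vel = vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    return pos + vel, vel

def sphere(x):
    # Toy objective used only for demonstration.
    return np.sum(x ** 2, axis=1)

pos = np.random.uniform(-5.0, 5.0, (20, 10))   # 20 particles, 10 dimensions
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), sphere(pos)
gbest = pbest[np.argmin(pbest_f)]
for _ in range(100):
    pos, vel = pso_step(pos, vel, pbest, gbest)
    f = sphere(pos)
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmin(pbest_f)]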
3. Main results

3.1. Quantum angle
In order to adopt PSO to update the Q-bits automatically, we first give a definition of the quantum angle.

Definition 1. A quantum angle is defined as an arbitrary angle θ, and a Q-bit is represented as [θ].

Then [θ] is equivalent to the original Q-bit $\begin{bmatrix}\sin(\theta) \\ \cos(\theta)\end{bmatrix}$, which satisfies $|\sin(\theta)|^2 + |\cos(\theta)|^2 = 1$ automatically. Hence an m-Q-bit individual $\begin{bmatrix}\alpha_1 & \alpha_2 & \cdots & \alpha_m \\ \beta_1 & \beta_2 & \cdots & \beta_m\end{bmatrix}$ can be replaced by $[\theta_1 | \theta_2 | \cdots | \theta_m]$, and the common rotation gate

$$\begin{bmatrix}\alpha_i' \\ \beta_i'\end{bmatrix} = \begin{bmatrix}\cos(\xi(\Delta\theta_i)) & -\sin(\xi(\Delta\theta_i)) \\ \sin(\xi(\Delta\theta_i)) & \cos(\xi(\Delta\theta_i))\end{bmatrix}\begin{bmatrix}\alpha_i \\ \beta_i\end{bmatrix}$$

is replaced by $[\theta_i'] = [\theta_i + \xi(\Delta\theta_i)]$.
Table 1 shows that $\theta_i'$ is only a simple function of $\Delta\theta_i$, $\cos\theta_i$, $\sin\theta_i$, B, X, f(B) and f(X). Since Delta has no theoretical basis so far, the efficiency of the quantum gate $[\theta_i'] = [\theta_i + \xi(\Delta\theta_i)]$ is limited. Therefore, a novel QSE is proposed in the following section.
3.2. Quantum swarm evolutionary algorithm

In this section, we use the concept of swarm intelligence from PSO and regard all m-Q-bit individuals in the population as an intelligent group, which is named the quantum swarm. First, we find the local best quantum angles and the global best value among the local ones. Then, according to these values, we update the quantum angles by the Q-gate. However, each individual still retains a large random adjustment space.

The proposed procedure, called QSE and based on the procedure of QEA, is summarized as follows:

1. Use the quantum angle to encode the Q-bits: $Q(t) = \{q_1^t, q_2^t, \ldots, q_n^t\}$, $q_j^t = [\theta_{j1}^t | \theta_{j2}^t | \cdots | \theta_{jm}^t]$.
2. Make each $x_{ji}^t = 0$ or $1$ of P(t) by observing the state of Q(t) through $|\cos(\theta_{ji}^t)|^2$ or $|\sin(\theta_{ji}^t)|^2$ as follows:
Begin
  j = 0
  While (j < n) do
    j = j + 1
    i = 0
    While (i < m) do
      i = i + 1
      If random[0, 1] > |cos(θ_{ji}^t)|^2
        Then x_{ji}^t = 1
        Else x_{ji}^t = 0
    End
  End
End
3. Modify the update procedure so that Q(t) is updated with the following improved PSO formulae instead of the traditional Q-gate U(t) (a Python sketch of this update is given after these steps):

$$v_{ji}^{t+1} = w \cdot \big(\omega \cdot v_{ji}^{t} + C_1 \cdot \mathrm{rand}() \cdot (\theta_{ji}^{t}(pbest) - \theta_{ji}^{t}) + C_2 \cdot \mathrm{rand}() \cdot (\theta_{i}^{t}(gbest) - \theta_{ji}^{t})\big),$$
$$\theta_{ji}^{t+1} = \theta_{ji}^{t} + v_{ji}^{t+1},$$

where $v_{ji}^t$, $\theta_{ji}^t$, $\theta_{ji}^t(pbest)$ and $\theta_i^t(gbest)$ are the velocity, current position, individual best and global best of the ith Q-bit of the jth m-Q-bit individual, respectively. We set $w = 0.99$, $\omega = 0.7298$, $C_1 = 1.42$ and $C_2 = 1.57$, which satisfy the convergence condition of the particles $\omega > (C_1 + C_2)/2 - 1$. Since $C_2 > C_1$, the particles converge faster towards the global optimal position of the swarm than towards the local optimal position of each particle, i.e., the algorithm has a global searching property [10,17].
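The sketch below shows one QSE iteration on a quantum-angle population following steps 1–3 above; the array layout, helper names and the placeholder best positions are our own assumptions for illustration.

import numpy as np

w, omega, C1, C2 = 0.99, 0.7298, 1.42, 1.57   # parameters given in Section 3.2

def observe(theta):
    # Step 2: x = 1 when a uniform random number exceeds |cos(theta)|^2,
    # i.e. with probability |sin(theta)|^2.
    return (np.random.rand(*theta.shape) > np.cos(theta) ** 2).astype(int)

def update_angles(theta, vel, theta_pbest, theta_gbest):
    # Step 3: improved PSO velocity update applied directly to the quantum angles.
    r1 = np.random.rand(*theta.shape)
    r2 = np.random.rand(*theta.shape)
    vel = w * (omega * vel
               + C1 * r1 * (theta_pbest - theta)
               + C2 * r2 * (theta_gbest - theta))
    return theta + vel, vel

# One iteration for n = 20 individuals with m = 100 Q-bits each.
theta = np.random.uniform(0.0, 2.0 * np.pi, (20, 100))
vel = np.zeros_like(theta)
theta_pbest = theta.copy()    # individual best angles (placeholder values)
theta_gbest = theta[0]        # global best angles (placeholder values)
X = observe(theta)            # binary solutions P(t) to be repaired and evaluated
theta, vel = update_angles(theta, vel, theta_pbest, theta_gbest)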
4. Experimental results
4.1. Solving 0–1 knapsack problem
The 0–1 knapsack problem is described as follows: given a set of items and a knapsack, select a subset of the items so as to maximize the profit

$$f(X) = \sum_{i=1}^{m} p_i x_i \quad \text{subject to} \quad \sum_{i=1}^{m} \omega_i x_i \le C,$$

where $X = \{x_1, \ldots, x_m\}$, $x_i = 0$ or $1$, $\omega_i$ is the weight of the ith item, $p_i$ is the profit of the ith item, and $C$ is the capacity of the knapsack. $x_i = 1$ if the ith item is selected, otherwise $x_i = 0$. In the experiments, we used data sets similar to those in Ref. [6]: a random weight $\omega_i \in [1, 10]$, a profit $p_i = \omega_i + l_i$ with a random value $l_i \in [0, 5]$, and a knapsack capacity $C = \frac{1}{2}\sum_{i=1}^{m} \omega_i$; three knapsack problems with 100, 250 and 500 items are considered. At the same time, we employed the same profit evaluation procedure and added a repair strategy similar to the one in Ref. [6], adapted to the structure of the proposed QSE:
Procedure repair P(t)
Begin
  knapsack-overfilled = false
  If Σ_{i=1}^{m} ω_i x_i^t > C then
    knapsack-overfilled = true
  While (knapsack-overfilled = true) do
    Select the jth item of the knapsack randomly
    x_j^t = 0
    If Σ_{i=1}^{m} ω_i x_i^t ≤ C then
      knapsack-overfilled = false
  End
  While (knapsack-overfilled = false) do
    Select the kth item of the knapsack randomly
    x_k^t = 1
    If Σ_{i=1}^{m} ω_i x_i^t > C then
      knapsack-overfilled = true
  End
  x_k^t = 0
End

And the evaluation of $f(X_j^t)$ is the profit $f(X_j^t) = \sum_{i=1}^{m} p_i x_{ji}^t$.
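A rough Python sketch of this repair-and-evaluate step follows; the NumPy-based helpers repair and profit are our own rendering of the procedure above rather than the code used in the paper.

import numpy as np

def repair(x, weights, C):
    # Randomly drop selected items while the knapsack is overfilled, then
    # randomly add unselected items until the next addition would overfill,
    # and undo that last addition (mirrors the procedure above).
    x = x.copy()
    while weights @ x > C:
        x[np.random.choice(np.flatnonzero(x == 1))] = 0
    while (x == 0).any():
        k = np.random.choice(np.flatnonzero(x == 0))
        x[k] = 1
        if weights @ x > C:
            x[k] = 0
            break
    return x

def profit(x, profits):
    # f(X) = sum of the profits of the selected items.
    return profits @ x

# Example data generated as in the experiments: w_i in [1, 10], p_i = w_i + l_i, l_i in [0, 5].
m = 100
weights = np.random.uniform(1, 10, m)
profits = weights + np.random.uniform(0, 5, m)
C = 0.5 * weights.sum()
x = repair(np.random.randint(0, 2, m), weights, C)
print(profit(x, profits))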
The comparison of QSE and QEA on the knapsack problem with different numbers of items and the same population size is shown in Fig. 1. (The results in Fig. 1 are the average of 10 tests with a population size of 20, Delta = 0.01π and 1000 iterations.) It shows that QSE is better than QEA in both speed and profit. Table 2 and Fig. 2 show that QSE with a larger population size converges faster and copes with larger item numbers better, but spends more running time. (The results in Fig. 2 are the average of 10 tests with population sizes of 10, 20 and 30, respectively, and 1000 iterations; in Table 2, the running time is the average time of 10 tests rounded to seconds.)

Fig. 1. Comparison of QSE and QEA on the knapsack problems with 100, 250 and 500 items.
Table 2
Results of different population sizes and iteration times of QSE on the knapsack problem

                              100 items               250 items               500 items
Size   Iteration times   Best profit   Time (s)   Best profit   Time (s)   Best profit   Time (s)
10     100               413.96        1          1079.0        2          2083.5        4
10     500               449.11        3          1119.5        8          2161.0        16
10     1000              453.18        5          1130.3        14         2174.4        34
20     100               443.12        2          1096.8        3          2122.9        7
20     500               452.66        6          1132.4        14         2182.6        33
20     1000              455.96        12         1137.2        28         2190.0        67
30     100               445.64        2          1104.5        5          2128.6        10
30     500               453.89        8          1132.7        19         2186.9        46
30     1000              457.03        16         1141.9        38         2190.0        98

Fig. 2. Comparison of different population sizes of QSE on the knapsack problems with 100, 250 and 500 items.
Fig. 3 and Table 3 show that QSE is better than many traditional heuristic algorithms. (In Fig. 3, each method is tested 10 times and each test runs 1000 iterations; in Table 3, all parameters are the same as in Fig. 3 and the running time is the average time of 10 tests rounded to seconds.) The comparison experiment is performed with a heuristic algorithm tool kit [9], which includes several heuristic functions for solving 0–1 knapsack problems, such as the hill climbing algorithm (hillks), the simulated annealing algorithm (saks) and the tabu search algorithm (tabuks). The test environment is a P4 2.6 GHz PC with 512 MB RAM, Windows XP and Matlab 6.5.

Fig. 3. Comparison of QSE (population size 20) and heuristic algorithms on the knapsack problems, including HILLKS, SAKS (annealing coefficient 0.99, initial temperature 100) and TABUKS (tabu table size 20).

Table 3
Results of QSE vs. traditional heuristic algorithms on the knapsack problem

              100 items               250 items               500 items
Method    Best profit   Time (s)   Best profit   Time (s)   Best profit   Time (s)
QSE       455.96        12         1137.2        28         2190          67
HILLKS    412.74        2          1035.9        4          2032.3        7
SAKS      415.27        9          1045.8        27         2048.8        29
TABUKS    429.47        10         1111.8        29         2120.1        77
4.2. Solving traveling salesman problem
As a well-known NP-hard combinatorial optimization problem, the TSP is a general and simple form of many complicated problems in different fields. It is easy to describe but hard to solve [1,16].

Given a weighted graph $G = (S, D)$, where $S = (S_1, S_2, \ldots, S_m)$ is the set of cities and $D = \{(S_i, S_j) : S_i, S_j \in S\}$ is the set of edges, let $d(S_i, S_j)$ be a cost measure associated with edge $(S_i, S_j) \in D$. In the following experiment, each city $S_i \in S$ is given by its coordinates $(x_i, y_i)$ and $d(S_i, S_j)$ is the Euclidean distance between $S_i$ and $S_j$. The objective of the TSP is to find a roundtrip of minimal total length visiting each city exactly once.
According to the characteristics of the TSP and the proposed QSE method, we encoded each individual of $Q = \{q_1, q_2, \ldots, q_n\}$ as $q = [\theta_{11}|\cdots|\theta_{1c}|\theta_{21}|\cdots|\theta_{2c}|\cdots|\theta_{m1}|\cdots|\theta_{mc}]$ by means of quantum angles, where $m$ is the number of cities and $c$ is a constant satisfying $2^c \ge m$. Then we treated each binary string $\{x_{i1} \ldots x_{ic}\}$ of the observed value $X = \{x_{11} \ldots x_{1c}, x_{21} \ldots x_{2c}, \ldots, x_{m1} \ldots x_{mc}\}$ as the visiting order value of the ith city, where $i = 1, 2, \ldots, m$. Before evaluating $f(X)$, we first sorted all the binary strings $\{x_{i1} \ldots x_{ic}\}$ to obtain the visiting sequence $S_1 \to S_2 \to \cdots \to S_m$, and then calculated

$$f(X) = \sum_{i=1}^{m-1} d(S_i, S_{i+1}) + d(S_m, S_1),$$

where $d(S_i, S_j)$ denotes the distance between cities $S_i$ and $S_j$. Therefore, the following repair and evaluate procedures are employed.
Procedure repair P(t)
Begin
  Sort all the binary strings $\{x_{i1}^t \ldots x_{ic}^t\}$.
  If some strings $\{x_{k_1 1}^t \ldots x_{k_1 c}^t\}, \ldots, \{x_{k_j 1}^t \ldots x_{k_j c}^t\}$ ($k_j \in \{1, 2, \ldots, m\}$) have the same value, then sort them randomly.
End
And the evaluation of $f(X_j^t)$ is the total tour length

$$f(X_j^t) = \sum_{i=1}^{m-1} d(S_{ji}^t, S_{j(i+1)}^t) + d(S_{jm}^t, S_{j1}^t).$$
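To illustrate the decoding and evaluation used for the TSP, a small Python sketch follows; the helper names and the random tie-breaking for equal binary ranks are our own reading of the repair procedure above, and the coordinates are placeholders rather than the BURMA14 data.

import numpy as np

def decode_tour(x_bits, m, c):
    # Each group of c bits is read as the visiting-order value of one city;
    # equal values are ordered randomly, as in the repair procedure.
    ranks = np.array([int("".join(str(b) for b in x_bits[i * c:(i + 1) * c]), 2)
                      for i in range(m)])
    keys = ranks + np.random.rand(m)   # random tie-break
    return np.argsort(keys)            # visiting order S_1 -> S_2 -> ... -> S_m

def tour_length(order, coords):
    # Closed-tour length: sum of Euclidean distances, including the return edge.
    pts = coords[order]
    diffs = pts - np.roll(pts, -1, axis=0)
    return float(np.sqrt((diffs ** 2).sum(axis=1)).sum())

m, c = 14, 4                                   # parameters used in Section 4.2
coords = np.random.rand(m, 2) * 100.0          # placeholder city coordinates
x_bits = np.random.randint(0, 2, m * c)        # observed binary string of one individual
print(tour_length(decode_tour(x_bits, m, c), coords))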
The performance of the proposed QSE algorithm on the TSP is examined on the benchmark problem BURMA14 with 14 nodes from [15]. The data for the symmetric TSP with 14 cities are shown in Table 4.

The related parameters of the approach are set as follows: n = 20, c = 4 and m = 14. After 500 iterations, the obtained optimal solution is

1 → 10 → 9 → 11 → 8 → 13 → 7 → 12 → 6 → 5 → 4 → 3 → 14 → 2.

Its cost is 30.8785, which is equal to the result obtained by Wang and Huang [18]. The initial random solution with cost 50.7981 and the best one obtained by QSE are shown in Fig. 4. The scale of the searched space is the product of the number of individuals, the number of Q-bits and the number of iterations, i.e., 20 × 4 × 14 × 1500 = 1,680,000. Therefore, the searched space is only about 0.054% of the solution space. The experiment was run on a PC (P4 2.6 GHz, 512 MB, Windows XP, Matlab 6.5).

Fig. 4. (a) Initial solution (cost 50.7981); (b) best solution (cost 30.8785).
Table 4
The data for the symmetric traveling salesman problem of 14 cities

City    X        Y
1       16.47    96.10
2       16.47    94.44
3       20.09    92.54
4       22.39    93.37
5       25.23    97.24
6       22.00    96.05
7       20.47    97.02
8       17.20    96.29
9       16.30    97.38
10      14.05    98.12
11      16.53    97.38
12      21.52    95.59
13      19.41    97.13
14      20.09    94.55
5. Conclusions
In this paper, a novel QSE based on QEA is presented. A novel Q-bit expression mechanism called the quantum angle is employed, and an improved PSO is adopted to update the Q-bits automatically. The simulation results on the 0–1 knapsack problem show that QSE is superior to the traditional QEA. The comparison experiments also show that QSE is better than many conventional heuristic algorithms, such as the hill climbing algorithm, the simulated annealing algorithm and the tabu search algorithm. In addition, the examination on the TSP indicates that the proposed QSE obtained the best known result by searching only a small proportion of the solution space. It was also observed that the performance degrades when the number of cities increases, which can be ascribed to the binary string coding used to represent the visiting orders of the cities. A study on the limitation of the binary coding is in progress. Future research will include finding a more effective method for choosing the parameters according to the information of different problems.
Acknowledgment
This work was supported by the National Natural
Science Foundation of China under Grant No. 60433020,
the Science-technology Development Project of Jilin
Province of China under Grant No. 20050705-2, and the
985 Project of Jilin University.
References
[1] L.G. Bendall, Domination analysis beyond the traveling salesman
problem, Ph.D. Thesis, Department of Mathematics, University of
Kentucky, 2004.
[2] P. Benioff, The computer as a physical system: a microscopic
quantum mechanical hamiltonian model of computers as represented
by Turing machines, J. Stat. Phys. 22 (1980) 563–591.
[3] X.Y. Feng, Y.C. Liang, Y.F. Sun, H.P. Lee, C.G. Zhou, Y. Wang, A
hybrid algorithm for combining forecasting based on AFTER–PSO,
in: Proceedings of PRICAI 2004, Lecture Notes in Artificial
Intelligence, vol. 3157, Springer, Berlin, 2004, pp. 942–943.
[4] R. Feynman, Simulating physics with computers, Int. J. Theoret.
Phys. 21 (6) (1982) 467–488.
[5] P.W. Shor, Algorithms for quantum computation: discrete logarithms and factoring, in: Proceedings of the 35th Annual Symposium on Foundations of Computer Science, IEEE Press, Piscataway, NJ, 1994, pp. 124–134.
[6] K.H. Han, J.H. Kim, Quantum-inspired evolutionary algorithm for a class of combinatorial optimization, IEEE Trans. Evol. Comput. 6 (6) (2002) 580–593.
[7] K.H. Han, J.H. Kim, Quantum-inspired evolutionary algorithms with
a new termination criterion, HeGate, and two-phase scheme, IEEE
Trans. Evol. Comput. 8 (2) (2004) 156–169.
[8] K.H. Han, K.H. Park, C.H. Lee, J.H. Kim, Parallel quantum-inspired
genetic algorithm for combinatorial optimization problem, in:
Proceedings of the 2001 IEEE Congress on Evolutionary Computation, Seoul, Korea, 2001, pp. 1422–1429.
[9] Heuristic Algorithm Tool Kit, Copyright 2002, Lars Aurdal/Rikshospitalet. Available online: http://www.idi.ntnu.no/lau/Forelesninger/.
[10] Y.X. Huang, Y. Wang, W.G. Zhou, Z.Z. Yu, C.G. Zhou, A fuzzy
neural network system based on generalized class cover and particle
swarm optimization, in: Proceedings of ICIC 2005, Hefei, China,
Lecture Notes in Computer Science, vol. 3645, Springer, Berlin, 2005,
pp. 119–128.
[11] J.S. Jang, K.H. Han, J.H. Kim, Quantum-inspired evolutionary
algorithm-based face verification, in: Proceedings of Genetic Evolutionary Computation Conference Lecture Notes in Computer Science,
vol. 2724, Springer, Berlin, 2003, pp. 2147–2156.
[12] J. Kennedy, R.C. Eberhart, Particle swarm optimization, in:
Proceeding of IEEE International Conference on Neural Networks,
vol. IV, Perth, Australia, 1995, pp. 1942–1948.
[13] Z. Michalewicz, Genetic Algorithms+Data Structures ¼ Evolution
Programs, third, revised and extended ed., Springer, Berlin, 1999.
[14] P.W. Shor, Quantum computing. Documenta Mathematica, in: Extra
Volume, Proceedings of the International Congress of Mathematicians, Berlin, Germany, 1998, pp. 467–486.
[15] Standard TSPLIB, 1997. Online available: http://elib.zib.de/pub/
Packages/mp-testdata/tsp/tsplib/tsplib.html.
[16] X.J. Tang, Genetic Algorithms with application to engineering
optimization, Ph.D. Thesis, University of Memphis, 2004.
[17] F. van den Bergh, An Analysis of Particle Swarm Optimizers, Ph.D. Thesis, Department of Natural and Agricultural Sciences, University of Pretoria, 2001.
[18] K.P. Wang, L. Huang, C.G. Zhou, W. Pang, Particle swarm
optimization for traveling salesman problem, in: Proceeding
of the 2nd ICMLC, Xi’an, China, Vol. 3, IEEE Press, 2003,
pp. 1583–1585.
[19] Y. Wang, C.G. Zhou, Y.X. Huang, X.Y. Feng, Identification of taste signals of tea based on minimal uncertainty neural networks, Comput. Res. Dev. (in Chinese) 42 (1) (2005) 66–71.
[20] Y. Wang, C.G. Zhou, Y.X. Huang, X.Y. Feng, Training minimal uncertainty neural networks by Bayesian theorem and particle swarm optimization, in: Proceedings of ICONIP 2004, Calcutta, India, Lecture Notes in Computer Science, vol. 3316, Springer, Berlin, 2004, pp. 579–584.
[21] J.A. Yang, B. Li, Z.Q. Zhuang, Z.F. Zhong, Quantum genetic
algorithm and its application research in blind source separation,
Mini-Micro Systems (in Chinese) 24 (8) (2003) 1518–1523.
Yan Wang, born in 1978, is currently a Ph.D.
student in the College of Computer Science and
Technology, Jilin University, China. His research
interests are computational intelligence, pattern
recognition, quantum-inspired evolutionary computation and bioinformatics. He has published
over 20 papers.
Wengang Zhou, born in 1981, is a postgraduate in
the College of Computer Science and Technology,
Jilin University, China. His research interests
include computational intelligence and bioinformatics.
Xiaoyue Feng, born in 1977, is currently a Ph.D.
student in the College of Computer Science and
Technology, Jilin University, China. Her research
interests are neural networks, combining forecasting, and quantum-inspired evolutionary computation.
Yanchun Liang, born in 1953, is a professor in the
College of Computer Science and Technology,
Jilin University, China. He is interested in
computational intelligence and bioinformatics.
Yanxin Huang, born in 1967, is a senior lecturer in the College of Computer Science and Technology, Jilin University, China. He is interested in the theory and applications of computational intelligence and pattern recognition.
Chunguang Zhou, born in 1947, is currently a
professor and the Dean in the College of
Computer Science and Technology, Jilin University, China. He is interested in the theory and
applications of computational intelligence, pattern recognition and image processing.
Dongbing Pu, born in 1970, is a lecturer in the College of Computer, Northeast Normal University, China. His research interests are computational intelligence and embedded systems.