Mathematics 09 02976 v2
Article
Multimodal Identification Based on Fingerprint and Face
Images via a Hetero-Associative Memory Method
Qi Han 1 , Heng Yang 1, * , Tengfei Weng 2 , Guorong Chen 1 , Jinyuan Liu 1 and Yuan Tian 1
1 College of Intelligent Technology and Engineering, Chongqing University of Science and Technology,
Chongqing 401331, China; [email protected] (Q.H.); [email protected] (G.C.); [email protected] (J.L.);
[email protected] (Y.T.)
2 College of Electrical Engineering, Chongqing University of Science and Technology, Chongqing 401331,
China; [email protected]
* Correspondence: [email protected]
Abstract: Multimodal identification, which exploits biometric information from more than one
biometric modality, is more secure and reliable than unimodal identification. Face recognition and
fingerprint recognition have received much attention in recent years for their unique advantages.
However, how to integrate these two modalities and develop an effective multimodal identification
system remain challenging problems. Hetero-associative memory (HAM) models store patterns
that can be reliably retrieved from other patterns in a robust way. In this paper, face and
fingerprint biometric features are therefore integrated by a hetero-associative memory method for
multimodal identification. The proposed multimodal identification system can integrate face and
fingerprint biometric features at the feature level when the system converges to the state of asymptotic
stability. In Experiment 1, the fingerprint predicted from an authorized user's face is compared
with the real fingerprint, and the matching rate of each group is higher than the given threshold. In
Experiments 2 and 3, the fingerprint predicted from the face of an unauthorized user
and from a stolen authorized user's face, respectively, is compared with the real fingerprint input,
and the matching rate of each group is lower than the given threshold. The experimental results
prove the feasibility of the proposed multimodal identification system.

Keywords: stability; multimodal identification; fingerprint recognition; face recognition

Citation: Han, Q.; Yang, H.; Weng, T.; Chen, G.; Liu, J.; Tian, Y. Multimodal Identification Based on Fingerprint and Face Images via a Hetero-Associative Memory Method. Mathematics 2021, 9, 2976. https://doi.org/10.3390/math9222976

Academic Editor: Ezequiel López-Rubio
Received: 28 October 2021
Accepted: 16 November 2021
Published: 22 November 2021

Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Copyright: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

With the rapid development of science and technology, people pay more attention
to security identification than ever before, and new theories and technologies for identity
authentication continually emerge. Traditional identification methods include keys, passwords,
codes, identification cards, and so on. One weakness of these methods is that
unauthorized persons can fabricate or steal protected data and exploit the rights of
authorized users for illegal activities. Although these traditional identification technologies,
which face various threats in the real world, still play an indispensable role in
low-security settings thanks to their convenience and low cost, increasingly many
consumers and enterprises choose biometric identification in numerous fields. Biometric
identification technologies such as face recognition [1–4], fingerprint recognition [5–7],
and gait recognition [8–10] are more secure and convenient than traditional technologies.

Biometric identification refers to the automated recognition of individuals based
on their biological or behavioral characteristics [11]. It is closely combined with high-tech
means such as optics, acoustics, biosensors, and biostatistics. Biometrics finds its
applications in the following areas: access control to facilities and computers, criminal
identification, border security, access to nuclear power plants, identity authentication in
network environments, airport security, issue of passports or driver licenses, and forensic
and medical databases [12]. Biometric identification can facilitate a well-rounded solution
for system identification and maintain a reliable and secure system. Biometric technology
has started to become a booming field and an important application direction of a cross
subject between computer science and biology. Unimodal biometric systems, such as fin-
gerprint identification system and face identification, have been studied in many previous
articles [6,13–20].
Through the studies of recent years, it is evident that multimodal biometric identifica-
tion technologies that use many kinds of biometric characteristics to identify individuals are
more secure and accurate than unimodal ones. They take advantage of multiple biometric
traits to improve the performance in many aspects including accuracy, noise resistance,
universality, and spoof attacks, and reduce performance degradation in huge database
applications [21]. Multi-biometric feature fusion is a crucial step in multimodal biometric
systems. The strength of the feature fusion technique lies in its ability to derive highly
discriminative information from original multiple feature sets and to eliminate redundant
information that results from the correlation between distinct feature sets, thus gaining the
most effective feature set with low dimensionality for the final decision [22]. In the course
of multimodal identification research, several new algorithms and applications have been
studied in recent years. For example, the authors of [11] presented a multimodal biometric
approach based on the fusion of the finger vein and electrocardiogram (ECG) signals. The
application of canonical correlation analysis (CCA) in the multimodal biometric field has attracted
many researchers [23,24], who employed CCA to fuse gait and face cues for human gender
recognition. A multimodal biometric identification system based on finger geometry, knuckle
print, and palm print was proposed in [21]. A face–iris multimodal biometric system using a
multi-resolution Log–Gabor filter with spectral regression kernel discriminant analysis was
studied in [25]. The authors of [26] proposed an efficient multimodal face and fingerprint
biometrics authentication system on space-limited tokens, e.g., smart cards, driver license,
and RFID cards. The authors of [27] proposed a novel multimodal biometric identification
system for face–iris recognition, based on binary particle swarm optimization and solving
the problem of mutually exclusive redundant features in combined features. Dialog Com-
munication Systems (DCS AG) developed BioID in [28], a multimodal identification system
that uses three different features—face, voice, and lip movement—to identify people.
In [29], a frequency-based approach produces a homogeneous biometric vector that integrates
iris and fingerprint data. The authors of [30] proposed a deep multimodal fusion network
to fuse multiple modalities (face, iris, and fingerprint) for person identification. They
demonstrated an increase in multimodal person identification performance by utilizing
multi-level feature abstractions in the fusion, rather than
using only the features from the last layer of each modality-specific CNN. However, the
CNN-based system in [30] cannot be applied to small sample sets.
Associative memory networks are single-layer networks that can store and recall patterns
based on data content rather than data address [31]. Associative memory (AM) systems can
be divided into hetero-associative memory (HAM) systems and auto-associative memory
(AAM) systems. When the input pattern and the output pattern are the same pattern, the
system can be called an AAM system. The HAM model, which stores coupling information
based on input–output patterns, can recall a stored output pattern by receiving a different
input pattern. In [32], to protect the face features database fundamentally, a new face
recognition method by AAM based on RNNs is proposed without establishing a face
feature database, in which the face features are transformed into the parameters of the
AAM model. We notice that the HAM models can construct the association between
the input and output patterns in a robust way, and this association can be regarded as
feature fusion of two different kinds of patterns. Thus, HAM models should be able to fuse
multiple biometric features in a robust way. Furthermore, a multimodal identification
system can be built from HAM models.
Considering the advantages of multimodal identification and the fusion capability
of HAM models, in this paper, the HAM model, which can store fusion features of face–fingerprint
patterns and recall a predictable fingerprint pattern by receiving a face pattern,
is constructed. The model is based on a cellular neural network, which belongs to a class
of recurrent neural networks (RNNs). The stability of the HAM model is a prerequisite
for its successful application in a multimodal identification system. Thus, the asymptotic
stability of the HAM model is also analyzed and discussed. In this paper, we also propose
a multimodal identification system based on fingerprint and face images by the HAM
method. Our three contributions in this paper are highlighted as follows.
At the fusion stage, the main work is to establish the HAM model, which stores
information of feature fusion using the HAM method. The HAM model, which is used
for feature fusion, is based on an improved HAM method, and the established model
can store the coupling information of the face and fingerprint patterns of the authorized
users. The first step is to acquire face images and fingerprint images of the authorized
users using some feature extractor device. The raw images are preprocessed, including the
processes of gray level transformation, image binarization, and segmentation. The regions
of interest (ROIs) of face images and fingerprint images after preprocessing are used to
fuse both face and fingerprint biometric features using the HAM method. The parameters
that come from the feature fusion constitute crucial model coefficients of the HAM model.
Then, the established HAM model can recall the fingerprint pattern of one authorized user
by receiving the face pattern of the user when the model converges to the asymptotically
stable equilibrium point. If the established model could not converge to the asymptotically
stable equilibrium point, the fusion parameters, namely model coefficients, would not be
given. The HAM model stores the two kinds of biometric features of all authorized users as
one group of model coefficients, and those biometric features cannot easily be recovered
by reversing the method.
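The preprocessing chain described above (gray-level transformation, binarization, and ROI segmentation) can be sketched with NumPy as follows. The ROI size matches the 35 × 25 patterns used later in the experiments, while the luminance weights and threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

def preprocess(rgb_image, roi=(0, 35, 0, 25), threshold=0.5):
    """Gray-level transformation, binarization to {-1, +1}, and ROI cropping.

    `roi` and `threshold` are illustrative choices, not values from the paper.
    """
    # Gray-level transformation: standard luminance weighting
    gray = rgb_image[..., :3] @ np.array([0.299, 0.587, 0.114])
    # Binarization: map to the bipolar {-1, +1} alphabet used by the HAM model
    bipolar = np.where(gray >= threshold, 1.0, -1.0)
    # Segmentation: keep only the region of interest
    r0, r1, c0, c1 = roi
    return bipolar[r0:r1, c0:c1]

# A random stand-in for a captured face image (real inputs come from a sensor)
img = np.random.rand(64, 64, 3)
pattern = preprocess(img)
print(pattern.shape)  # (35, 25), matching the ROI size used in the experiments
```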
In the identification stage, the HAM model established in the fusion stage is used
to test the legitimacy of the visitors. Firstly, the face image and fingerprint image of one
visitor are acquired using proper feature extractor devices in the identification stage. The
visitor’s face pattern after preprocessing is sent to the HAM model established in the fusion
stage. Then, there will be an output pattern when the established HAM model converges
to the asymptotically stable equilibrium point. By comparing the model’s output pattern
with the visitor’s real fingerprint pattern after preprocessing, the recognition pass rate of
the visitor can be obtained. If the recognition rate of the visitor
exceeds a given threshold, the identification is successful and the visitor is granted the rights of an
authorized user. Otherwise, the visitor is treated as an illegal user.
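The accept/reject decision above can be sketched as follows. The matching-rate definition (fraction of agreeing bipolar entries) and the threshold value are plausible assumptions, since the text only speaks of "a given threshold".

```python
import numpy as np

def matching_rate(predicted, real):
    """Fraction of positions where the recalled fingerprint pattern agrees
    with the visitor's real fingerprint pattern (both bipolar {-1, +1})."""
    predicted = np.asarray(predicted).ravel()
    real = np.asarray(real).ravel()
    return float(np.mean(predicted == real))

def identify(predicted, real, threshold=0.9):
    """Accept the visitor only if the matching rate exceeds the threshold.
    The threshold value here is illustrative."""
    return matching_rate(predicted, real) > threshold

real = np.sign(np.random.randn(35 * 25))
real[real == 0] = 1
print(identify(real, real))   # True: a perfect recall is accepted
noisy = real.copy()
noisy[: len(noisy) // 2] *= -1
print(identify(noisy, real))  # False: about half the entries disagree
```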
3. Research Background
In this section, we briefly introduce the HAM model, which is based on a class of
recurrent neural networks, as well as the background knowledge of the system stability
and variable gradient method.
$$\dot V(x) = \sum_{i=1}^{n} \frac{\partial V}{\partial x_i}\,\dot x_i = (\mathrm{grad}\,V(x))^T [\dot x_1, \ldots, \dot x_n]^T = (\mathrm{grad}\,V(x))^T\,\dot x \tag{6}$$
It can be seen from (6) that $V(x)$ can be obtained by the line integral of $\mathrm{grad}\,V$, namely,

$$V(x) = \int_0^x (\mathrm{grad}\,V)^T dx = \sum_{i=1}^{n} \int_0^{x_i} \nabla V_i\,dx_i \tag{7}$$
If the $n$-dimensional curl of $\mathrm{grad}\,V$ is equal to zero, namely, $\mathrm{rot}(\mathrm{grad}\,V) = 0$, then $\mathrm{grad}\,V$ can
be regarded as a conservative field, and the line integral shown in Formula (7)
is independent of the path. The necessary and sufficient condition for $\mathrm{rot}(\mathrm{grad}\,V) = 0$ is
$\partial \nabla V_i/\partial x_j = \partial \nabla V_j/\partial x_i$, $\forall i, j = 1, 2, \ldots, n$. Therefore, for convenience, Formula (7) can be
rewritten as
$$V(x) = \int_0^{x_1} \nabla V_1(x_1, 0, \ldots, 0)\,dx_1 + \int_0^{x_2} \nabla V_2(x_1, x_2, 0, \ldots, 0)\,dx_2 + \cdots + \int_0^{x_n} \nabla V_n(x_1, x_2, x_3, \ldots, x_n)\,dx_n \tag{8}$$
Appropriate coefficients are selected such that $\dot V(x)$ is negative definite and $\mathrm{rot}(\mathrm{grad}\,V)$
is equal to zero. If the resulting $V(x)$ is positive definite, then the conditions of the second method of Lyapunov are satisfied,
and the system is asymptotically stable at the equilibrium point.
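A quick numeric sanity check of this construction, assuming the diagonal gradient components $\nabla V_k = a_{kk} x_k$ used later in the proof: integrating them along the axis-by-axis path of Formula (8) should recover the closed form $V(x) = \tfrac{1}{2}\sum_k a_{kk} x_k^2$. All numeric values below are illustrative.

```python
import numpy as np

# Illustrative positive coefficients a_kk and an arbitrary state x
a = np.array([2.0, 3.0, 5.0])
x = np.array([0.4, -1.2, 0.7])

def V_by_path_integral(a, x, steps=100_000):
    """Integrate gradV along the axis-by-axis path of Formula (8)."""
    total = 0.0
    for i in range(len(x)):
        s = np.linspace(0.0, x[i], steps)
        f = a[i] * s
        # trapezoid rule for the i-th segment of the path integral
        total += float(np.sum((f[:-1] + f[1:]) / 2.0) * (s[1] - s[0]))
    return total

V_closed = 0.5 * float(np.sum(a * x**2))  # the positive definite V of Equation (12)
print(abs(V_by_path_integral(a, x) - V_closed) < 1e-8)  # True
```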
4. Main Results
In this section, under the research background, the asymptotic stability of the HAM
model with multiple time-varying delays using variable gradient method and the algorithm
of feature fusion by the HAM method are presented successively.
Theorem 1. There is a stable equilibrium point in system (2), which makes the HAM model
asymptotically stable.
Proof of Theorem 1. As $f$ is bounded, it can be proved that system (2) has at least one
equilibrium point using the Schauder fixed point theorem. Assume that $s^* = (s_1^*, s_2^*, \ldots, s_n^*)^T$
is an equilibrium point of the neural network.
Let $x_i(t) = s_i(t) - s_i^*$ and $f(x_i(t)) = f(s_i(t)) - f(s_i^*) = f(x_i(t) + s_i^*) - f(s_i^*)$; then (1)
can be rewritten as
$$\dot x_i(t) = -p_i x_i(t) + \sum_{j=1}^{n} q_{ij} f(x_j(t)) + \sum_{j=1}^{n} r_{ij} u_j(t - \tau_{ij}(t)) + c_i, \qquad c_i = \sum_{j=1}^{n} q_{ij} f(s_j^*) - p_i s_i^* + v_i, \quad i = 1, 2, \ldots, n \tag{9}$$
For the HAM model (9), if there exists a Lyapunov function $V(x)$, and the model's
equilibrium point is $x^* = (x_1^*, x_2^*, \ldots, x_n^*)^T = 0$, the single-valued gradient of (9) can be
defined as in Equation (5). From Equation (6),
$$\dot V(x) = (\mathrm{grad}\,V(x))^T \dot x = (a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n)\,\dot x_1 + \cdots + (a_{n1} x_1 + a_{n2} x_2 + \cdots + a_{nn} x_n)\,\dot x_n \tag{10}$$

Selecting the off-diagonal coefficients $a_{kj}$ ($k \neq j$) to be zero gives

$$\dot V(x) = a_{11} x_1 \dot x_1 + \cdots + a_{nn} x_n \dot x_n = \sum_{k=1}^{n} a_{kk} x_k \dot x_k = \sum_{k=1}^{n} a_{kk} x_k \left(-p_k x_k(t) + \sum_{j=1}^{n} q_{kj} f(x_j(t)) + \sum_{j=1}^{n} r_{kj} u_j(t - \tau_{kj}(t)) + c_k\right) \tag{11}$$
When $\dot s_k = 0$, from Equation (1), $s_k^* = \left(\sum_{j=1}^{n} q_{kj} f(s_j(t)) + \sum_{j=1}^{n} r_{kj} u_j(t - \tau_{kj}(t)) + v_k\right)/p_k$.
If $x_k(t) > 0$, i.e., $s_k(t) - s_k^* > 0$, then $p_k s_k(t) > \sum_{j=1}^{n} q_{kj} f(s_j(t)) + \sum_{j=1}^{n} r_{kj} u_j(t - \tau_{kj}(t)) + v_k$. By
replacing $s_k(t)$ with $x_k(t)$, the inequality $-p_k x_k(t) + \sum_{j=1}^{n} q_{kj} f(x_j(t)) + \sum_{j=1}^{n} r_{kj} u_j(t - \tau_{kj}(t)) + c_k < 0$ can be obtained. Analogously, if $x_k(t) < 0$, it can be proved that $-p_k x_k(t) + \sum_{j=1}^{n} q_{kj} f(x_j(t)) + \sum_{j=1}^{n} r_{kj} u_j(t - \tau_{kj}(t)) + c_k > 0$. Therefore, both cases lead to $\dot V(x) < 0$,
namely, $\dot V(x)$ is negative definite. Furthermore, it is clear that $\partial \nabla V_i/\partial x_j = \partial \nabla V_j/\partial x_i = 0$,
$\forall i, j = 1, 2, \ldots, n$. Therefore, from Equation (8), the Lyapunov function can be obtained as

$$V(x) = \int_0^{x_1} \nabla V_1(x_1, 0, \ldots, 0)\,dx_1 + \int_0^{x_2} \nabla V_2(x_1, x_2, 0, \ldots, 0)\,dx_2 + \cdots + \int_0^{x_n} \nabla V_n(x_1, x_2, \ldots, x_n)\,dx_n = \int_0^{x_1} a_{11} x_1\,dx_1 + \int_0^{x_2} (a_{21} x_1 + a_{22} x_2)\,dx_2 + \cdots + \int_0^{x_n} (a_{n1} x_1 + \cdots + a_{nn} x_n)\,dx_n = \int_0^{x_1} a_{11} x_1\,dx_1 + \int_0^{x_2} a_{22} x_2\,dx_2 + \cdots + \int_0^{x_n} a_{nn} x_n\,dx_n \tag{12}$$
which is always positive definite. Thus, we have proved that the HAM model is asymptotically
stable at the equilibrium point using the variable gradient method.
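The asymptotic behavior established by Theorem 1 can be illustrated with a small simulation. The network size, weights, and tanh activation below are illustrative stand-ins (the paper's $f$ is only assumed bounded), and since the external input $u$ is constant here, the delayed term $u_j(t - \tau_{kj}(t))$ equals $u_j$ at all times.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
p = 2.0 * np.ones(n)                   # self-decay rates p_i
Q = 0.1 * rng.standard_normal((n, n))  # connection weights q_ij (kept small so -p dominates)
R = 0.1 * rng.standard_normal((n, n))  # input weights r_ij
u = np.sign(rng.standard_normal(n))    # a fixed bipolar input pattern; since u is
                                       # constant, u(t - tau_ij(t)) == u at all times
s = rng.standard_normal(n)             # arbitrary initial state
dt = 0.01
for _ in range(5000):
    # Euler step of Eq. (9)-type dynamics with tanh as the bounded activation f
    s = s + dt * (-p * s + Q @ np.tanh(s) + R @ u)

residual = -p * s + Q @ np.tanh(s) + R @ u
print(np.linalg.norm(residual) < 1e-6)  # True: the state has settled at an equilibrium
```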
Remark 1. The HAM method is used to fuse each authorized user’s face and fingerprint bio-
metric features. The face and fingerprint patterns of each authorized user are the input vector
β n×1 = [ β 1 , β 2 , . . . , β n ] T and output vector αn×1 = [α1 , α2 , . . . , αn ] T of the neural network
model, respectively. When the established HAM model converges to the asymptotically stable
equilibrium point, the output vector can be obtained by receiving an input vector, i.e., the fingerprint
pattern can be recalled by the face pattern of the authorized user.
Proof of Theorem 2. In (2), $s^* = [s_1^*, s_2^*, \ldots, s_n^*]^T$ is defined as the equilibrium of the HAM
model, and $\alpha = f(s^*) \in \{\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_n)^T \mid \alpha_i = +1 \text{ or } -1\}$ is an
equilibrium point of the neural network.
For the first case, consider $\alpha_i = +1$; then $\lambda \alpha_i > p_i$. When $Q\alpha + R\beta + V = \lambda\alpha$,
according to Lemma 1 (i), $\sum_{j=1}^{n} q_{ij} \alpha_j + \sum_{j=1}^{n} r_{ij} \beta_j + v_i > p_i$. For the second case, consider
$\alpha_i = -1$; then $\lambda \alpha_i < -p_i$. When $Q\alpha + R\beta + V = \lambda\alpha$, according to Lemma 1 (ii), $\sum_{j=1}^{n} q_{ij} \alpha_j + \sum_{j=1}^{n} r_{ij} \beta_j + v_i < -p_i$. Therefore, the HAM model (2) converges to a stable equilibrium
point $s^*$, where $|s^*| > 1$.
Let $S = \alpha$ and $U = \beta$, where $\alpha$ and $\beta$ are the feature vectors extracted from the
fingerprint and face images of one authorized user after preprocessing, respectively.
It is obvious that, when α and β meet the condition in Theorem 2, the coupling
relationship of the face and fingerprint patterns of one authorized user is established, and
the fusion features are transformed into HAM model parameters. The HAM model, which
stores fusion features of face and fingerprint patterns of the user, can recall a predictable
fingerprint pattern $\hat S$ by receiving a stored face pattern $U$. The HAM model network is of
size $N \times M$. Let the neighborhood radius be 1; then there are eighteen unknown connection
weights and one unknown bias value $v_i$ for each neuron. Denote the nineteen unknown
parameters of the $i$th neuron as $\Phi_i = [q_{i\_1}, q_{i\_2}, \ldots, q_{i\_8}, q_{i\_9}, r_{i\_1}, r_{i\_2}, \ldots, r_{i\_8}, r_{i\_9}, v_i]^T$.
Remark 2. In the fusion stage, the established HAM model can store fusion features of all authorized
users. Therefore, all model parameters Φi (i = 1, 2, . . . , n) to be obtained should be determined by
the face and fingerprint patterns of all authorized users.
For $m$ authorized users, $Q\alpha + R\beta + V = \lambda\alpha$ can be transformed into

$$\Delta_i \Phi_i = \hat\alpha_i \lambda \quad (i = 1, 2, \ldots, n) \tag{14}$$
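Equation (14) is, for each neuron, a linear system in its nineteen unknown parameters. A minimal NumPy sketch of solving it, assuming $\Delta_i$ stacks one row per authorized user (eighteen neighborhood values plus a constant column for the bias $v_i$); the continuous random entries stand in for the bipolar patterns and all shapes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
m, lam = 7, 2.0  # seven authorized users and lambda = 2, as in Experiment 1

# Delta_i: one row per user, built from the user's local fingerprint neighborhood
# (9 values), face neighborhood (9 values), and a constant 1 for the bias -> 19 columns.
Delta_i = np.hstack([rng.standard_normal((m, 18)), np.ones((m, 1))])
alpha_hat_i = np.sign(rng.standard_normal(m))  # target fingerprint values at neuron i

# Underdetermined system (7 equations, 19 unknowns): least squares returns the
# minimum-norm solution, one admissible choice of the fusion parameters Phi_i.
Phi_i, *_ = np.linalg.lstsq(Delta_i, alpha_hat_i * lam, rcond=None)
print(Phi_i.shape)                                      # (19,)
print(np.allclose(Delta_i @ Phi_i, alpha_hat_i * lam))  # True: Eq. (14) is satisfied
```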
Remark 3. When the established HAM model, which stores biometric fusion features of all
authorized users, receives a face pattern vector of an unauthorized user, there will exist a forecasting
fingerprint pattern output for the visitor. In [32], the input pattern and the forecast output pattern
are the same biometric pattern: that work uses the AAM network structure, which associates the face input
with the same face output, and thus cannot achieve the fusion of different biometric modalities. In this
paper, two different biometric patterns are studied. This is the first attempt to integrate two different
biometric features using the HAM method.
Furthermore, a convolutional neural network needs a large amount of training data and is
difficult to train on small samples, so we do not use a convolutional neural network for the
small-sample data in this paper.
5.1. Experiment 1
We assume that the face image and fingerprint image in each group come from the
same person. Seven groups of images of authorized users from two databases mentioned
above are shown in Figure 2. The first step in the biometric identification system is to
extract regions of interest (ROIs). In our experiments, all face image ROIs and fingerprint
image ROIs after preprocessing are 35 × 25 pixels in size.
The seven groups of face patterns and fingerprint patterns are used to solve the model
parameters $\Phi_i$ ($i = 1, 2, \ldots, 875$). Let $p_i = 1$ ($i = 1, 2, \ldots, 875$) and $\lambda = 2$. The fingerprint
feature vectors $(\alpha^{(1)}, \alpha^{(2)}, \ldots, \alpha^{(7)})$ and the face feature vectors $(\beta^{(1)}, \beta^{(2)}, \ldots, \beta^{(7)})$ can be
obtained from the seven groups of face patterns and fingerprint patterns of all authorized
users. $E_1^{(1)}, E_2^{(1)}, \ldots, E_{35}^{(1)}, E_1^{(2)}, E_2^{(2)}, \ldots, E_{35}^{(2)}, \ldots, E_1^{(7)}, E_2^{(7)}, \ldots, E_{35}^{(7)}$ and $F_1^{(1)}, F_2^{(1)}, \ldots, F_{35}^{(1)}, F_1^{(2)}, F_2^{(2)}, \ldots, F_{35}^{(2)}, \ldots, F_1^{(7)}, F_2^{(7)}, \ldots, F_{35}^{(7)}$ were obtained from the face feature vectors and fingerprint
feature vectors, respectively. According to the feature fusion algorithm, the matrices
$\Delta_1, \ldots, \Delta_{875}$ were obtained. Furthermore, $\hat\alpha_1, \hat\alpha_2, \ldots, \hat\alpha_{875}$ were obtained through the matrix
transform method. Finally, $\Phi_i$ ($i = 1, 2, \ldots, 875$) was calculated using matrix operations.
According to the proposed HAM method in Section 4, when the established HAM
model converges to the asymptotically stable equilibrium point, the internal coupling relationship
between face and fingerprint patterns is built by solving the model parameters.
The established multimodal identification system fuses face and fingerprint biometrics
in the fusion stage. The matcher pass rate can be obtained by comparing $S$ and $\hat S$ when the
system input is one of the face patterns of the authorized users. We verified the matcher
pass rate, as shown in Table 1, whose results prove the effectiveness of the multimodal
identification system.
Table 1. The recognition pass rate of the multimodal identification system for authorized users.
5.2. Experiment 2
The results of the experiment above test the feasibility and efficiency of the algorithm.
If an unauthorized user attempts to access the identification system, the matcher
pass rate must be low enough for the system to reject illegal users. In this experiment, we
choose seven groups of unauthorized users whose fingerprints and faces are different from
the groups in Experiment 1. The flow diagram of identification is shown in Figure 3.
Figure 3. The flow diagram of identification.
In this experiment, we found that the pass rate of unauthorized users is much lower
than the identification matcher threshold. Hence, those users who attempted to spoof
this identification system were identified as illegal users. We obtained seven groups of
unauthorized users’ identification results, shown in Table 2.
Table 2. The matcher pass rate of the multimodal identification system for unauthorized users.
5.3. Experiment 3
Consider the case wherein an attacker who has obtained the forged fingerprint or the forged face
of one authorized user through illegal means beforehand wants to cheat the system. As
the attacker has completely compromised one kind of biometric information, it is easy to
cheat a single-mode identification system if there is no extra validation. However, in the
multimodal identification system, the attacker cannot spoof this identification system easily.
Group 15 to Group 21 are the attackers who have face information of the authorized users
(Group 1 to Group 7), respectively. Further, Group 22 to Group 28 are the attackers who
have fingerprint information of the authorized users (Group 1 to Group 7), respectively.
The identification results are shown in Table 3. The results of the experiment proved the
security of our proposed system.
The experimental results prove the feasibility of the proposed multimodal identification
system based on the HAM method. It can guarantee that authorized users have access,
while unauthorized users and attackers have no access. The proposed identification
method, which fuses two different biometric modalities using the HAM method, applies
not only to fusing face and fingerprint features, but also to other
biometric modalities.
Table 3. The matcher pass rate of the multimodal identification system for attackers.
6. Conclusions
To solve the multimodal identification problem based on face and fingerprint images,
in this paper, we proposed a new feature fusion method for multimodal identification
based on the HAM model, which effectively fuses the face features and fingerprint features
of the authorized users. In the process of constructing the multimodal identification
system, the stability of the established network model is discussed. We prove that the
HAM model can reach the asymptotically stable state when the HAM model fuses face
and fingerprint biometrics. The proposed multimodal identification system can integrate
face and fingerprint biometric features at feature level when the system converges to the
state of asymptotic stability. In Section 5, we test the effectiveness and security of the
proposed multimodal identification system based on face and fingerprint images using
two experiments.
Author Contributions: Conceptualization, Q.H. and H.Y.; methodology, H.Y.; software, T.W.;
validation, G.C.; formal analysis, J.L.; investigation, Y.T.; writing—original draft preparation, H.Y.;
writing—review and editing, Q.H.; visualization, H.Y.; supervision, Q.H. All authors have read and
agreed to the published version of the manuscript.
Funding: This research was funded in part by CAS “Light of West China” Program, in part by
Research Foundation of The Natural Foundation of Chongqing City (cstc2021jcyj-msxmX0146), in
part by Scientific and Technological Research Program of Chongqing Municipal Education Com-
mission (KJZD-K201901504, KJQN 201901537), in part by humanities and social sciences research of
Ministry of Education (19YJCZH047), and in part by Postgraduate Innovation Program of Chongqing
University of Science and Technology (YKJCX2020820). The authors would like to thank the support
of China Scholarship Council.
Informed Consent Statement: All the images and data used in this article were taken from
public repositories.
Data Availability Statement: The data used to support the findings of this study are available from
the corresponding author.
Conflicts of Interest: The authors declare no conflict of interest.
Appendix A
$$E_\xi^{(k)} = \begin{bmatrix} 0 & \alpha^{(k)}_{(\xi-1)M+1} & \alpha^{(k)}_{(\xi-1)M+2} \\ \alpha^{(k)}_{(\xi-1)M+1} & \alpha^{(k)}_{(\xi-1)M+2} & \alpha^{(k)}_{(\xi-1)M+3} \\ \alpha^{(k)}_{(\xi-1)M+2} & \alpha^{(k)}_{(\xi-1)M+3} & \alpha^{(k)}_{(\xi-1)M+4} \\ \vdots & \vdots & \vdots \\ \alpha^{(k)}_{\xi M-2} & \alpha^{(k)}_{\xi M-1} & \alpha^{(k)}_{\xi M} \\ \alpha^{(k)}_{\xi M-1} & \alpha^{(k)}_{\xi M} & 0 \end{bmatrix}_{M \times 3}, \qquad E^{(k)} = \begin{bmatrix} 0 & E_1^{(k)} & E_2^{(k)} \\ E_1^{(k)} & E_2^{(k)} & E_3^{(k)} \\ E_2^{(k)} & E_3^{(k)} & E_4^{(k)} \\ \vdots & \vdots & \vdots \\ E_{N-1}^{(k)} & E_N^{(k)} & 0 \end{bmatrix}_{n \times 9}$$

$$F_\xi^{(k)} = \begin{bmatrix} 0 & \beta^{(k)}_{(\xi-1)M+1} & \beta^{(k)}_{(\xi-1)M+2} \\ \beta^{(k)}_{(\xi-1)M+1} & \beta^{(k)}_{(\xi-1)M+2} & \beta^{(k)}_{(\xi-1)M+3} \\ \beta^{(k)}_{(\xi-1)M+2} & \beta^{(k)}_{(\xi-1)M+3} & \beta^{(k)}_{(\xi-1)M+4} \\ \vdots & \vdots & \vdots \\ \beta^{(k)}_{\xi M-2} & \beta^{(k)}_{\xi M-1} & \beta^{(k)}_{\xi M} \\ \beta^{(k)}_{\xi M-1} & \beta^{(k)}_{\xi M} & 0 \end{bmatrix}_{M \times 3}, \qquad F^{(k)} = \begin{bmatrix} 0 & F_1^{(k)} & F_2^{(k)} \\ F_1^{(k)} & F_2^{(k)} & F_3^{(k)} \\ F_2^{(k)} & F_3^{(k)} & F_4^{(k)} \\ \vdots & \vdots & \vdots \\ F_{N-1}^{(k)} & F_N^{(k)} & 0 \end{bmatrix}_{n \times 9}$$
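The banded sliding-window structure of these matrices can be generated programmatically. A sketch assuming a flattened bipolar segment of one pattern row; the function and variable names are illustrative:

```python
import numpy as np

def band_matrix(v):
    """Build the M x 3 sliding-window matrix of the appendix from a length-M
    segment v: row t holds (v[t-1], v[t], v[t+1]), with 0 outside the segment."""
    M = len(v)
    padded = np.concatenate([[0.0], v, [0.0]])  # zero-pad both ends
    return np.stack([padded[t:t + 3] for t in range(M)])

# Illustrative bipolar segment standing in for alpha^{(k)} entries of one row block
seg = np.array([1.0, -1.0, 1.0, 1.0, -1.0])
E_xi = band_matrix(seg)
print(E_xi.shape)        # (5, 3)
print(E_xi[0].tolist())  # [0.0, 1.0, -1.0]: first row is (0, v_1, v_2)
```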
References
1. Wang, S.-H.; Phillips, P.; Dong, Z.-C.; Zhang, Y.-D. Intelligent facial emotion recognition based on stationary wavelet entropy and
Jaya algorithm. Neurocomputing 2018, 272, 668–676. [CrossRef]
2. Zhang, Y.-D.; Yang, Z.-J.; Lu, H.; Zhou, X.-X.; Phillips, P.; Liu, Q.-M.; Wang, S. Facial Emotion Recognition Based on Biorthogonal
Wavelet Entropy, Fuzzy Support Vector Machine, and Stratified Cross Validation. IEEE Access 2016, 4, 8375–8385. [CrossRef]
3. Lawrence, S.; Giles, C.L.; Tsoi, A.C.; Back, A.D. Face recognition: A convolutional neural-network approach. IEEE Trans. Neural
Netw. 1997, 8, 98–113. [CrossRef]
4. Tan, X.; Triggs, W. Enhanced Local Texture Feature Sets for Face Recognition Under Difficult Lighting Conditions. IEEE Trans.
Image Process. 2010, 19, 1635–1650. [CrossRef]
5. Barni, M.; Scotti, F.; Piva, A.; Bianchi, T.; Catalano, D.; Di Raimondo, M.; Labati, R.D.; Failla, P.; Fiore, D.; Lazzeretti, R.; et al.
Privacy-preserving fingercode authentication. In Proceedings of the 12th ACM Workshop on Multimedia and Security,
New York, NY, USA, 9–10 September 2010; pp. 231–240.
6. Jain, A.; Hong, L.; Pankanti, S.; Bolle, R. An identity-authentication system using fingerprints. Proc. IEEE 1997, 85, 1365–1388.
[CrossRef]
7. Wahab, A.; Chin, S.H.; Tan, E.C. Novel approach to automated fingerprint recognition. IEE Proc. Vis. Image Signal Process. 1998,
145, 160–166. [CrossRef]
8. Bashir, K.; Xiang, T.; Gong, S. Gait recognition without subject cooperation. Pattern Recognit. Lett. 2010, 31, 2052–2060. [CrossRef]
9. Han, J.; Bhanu, B. Individual recognition using gait energy image. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 28, 316–322.
[CrossRef]
10. Wang, L.; Tan, T.; Hu, W.; Ning, H. Automatic gait recognition based on statistical shape analysis. IEEE Trans. Image Process. 2003,
12, 1120–1131. [CrossRef] [PubMed]
11. Su, K.; Yang, G.; Wu, B.; Yang, L.; Li, D.; Su, P.; Yin, Y. Human identification using finger vein and ECG signals. Neurocomputing
2019, 332, 111–118. [CrossRef]
12. Meenakshi, V.S.; Padmavathi, G. Security analysis of password hardened multimodal biometric fuzzy vault with combined
feature points extracted from fingerprint, iris and retina for high security applications. Procedia Comput. Sci. 2010, 2, 195–206.
[CrossRef]
13. Bronstein, A.M.; Bronstein, M.M.; Kimmel, R. Three-dimensional face recognition. Int. J. Comput. Vis. 2005, 64, 5–30. [CrossRef]
14. Gu, J.; Zhou, J.; Yang, C. Fingerprint recognition by combining global structure and local cues. IEEE Trans. Image Process. 2006, 15,
1952–1964. [PubMed]
15. Haq, E.U.; Xu, H.; Khattak, M.I. Face recognition by SVM using local binary patterns. In Proceedings of the 14th Web Information
Systems & Applications Conference IEEE, Liuzhou, China, 11–12 November 2017.
16. Kasban, H. Fingerprints verification based on their spectrum. Neurocomputing 2016, 171, 910–920. [CrossRef]
17. Medina-Pérez, M.A.; Moreno, A.M.; Ballester, M.; Ángel, F.; García-Borroto, M.; Loyola-González, O.; Altamirano-Robles, L.
Latent fingerprint identification using deformable minutiae clustering. Neurocomputing 2016, 175, 851–865. [CrossRef]
18. Nefian, A.V.; Hayes, M.H. An embedded HMM-based approach for face detection and recognition. In Proceedings of the IEEE
International Conference on Acoustics, Speech, and Signal Processing, Phoenix, AZ, USA, 15–19 March 1999; pp. 3553–3556.
19. Zhao, C.; Miao, D. Two-dimensional color uncorrelated principal component analysis for feature extraction with application
to face recognition. In Proceedings of the Chinese Conference on Biometric Recognition, Jinan, China, 16–17 November 2013;
pp. 138–145.
20. Zhong, F.; Zhang, J. Face recognition with enhanced local directional patterns. Neurocomputing 2013, 119, 375–384. [CrossRef]
21. Zhu, L.; Zhang, S. Multimodal biometric identification system based on finger geometry, knuckle print and palm print. Pattern
Recognit. Lett. 2010, 31, 1641–1649. [CrossRef]
22. Ahmad, M.I.; Woo, W.L.; Dlay, S. Non-stationary feature fusion of face and palmprint multimodal biometrics. Neurocomputing
2016, 177, 49–61. [CrossRef]
23. Sun, Q.-S.; Zeng, S.-G.; Liu, Y.; Heng, P.-A.; Xia, D.-S. A new method of feature fusion and its application in image recognition.
Pattern Recognit. 2005, 38, 2437–2448. [CrossRef]
24. Shan, C.; Gong, S.; McOwan, P.W. Fusing gait and face cues for human gender recognition. Neurocomputing 2008, 71, 1931–1938.
[CrossRef]
25. Ammour, B.; Bouden, T.; Boubchir, L. Face–iris multi-modal biometric system using multi-resolution Log-Gabor filter with
spectral regression kernel discriminant analysis. IET Biom. 2018, 7, 482–489. [CrossRef]
26. Khan, M.K.; Zhang, J. Multimodal face and fingerprint biometrics authentication on space-limited tokens. Neurocomputing 2008,
71, 3026–3031. [CrossRef]
27. Xiong, Q.; Zhang, X.; Xu, X.; He, S. A Modified Chaotic Binary Particle Swarm Optimization Scheme and Its Application in
Face-Iris Multimodal Biometric Identification. Electronics 2021, 10, 217. [CrossRef]
28. Frischholz, R.W.; Ulrich, D. BioID: A multimodal biometric identification system. Computer 2000, 33, 64–68. [CrossRef]
29. Conti, V.; Militello, C.; Sorbello, F.; Vitabile, S. A Frequency-based Approach for Features Fusion in Fingerprint and Iris Multimodal
Biometric Identification Systems. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 2010, 40, 384–395. [CrossRef]
30. Soleymani, S.; Dabouei, A.; Kazemi, H.; Dawson, J.; Nasrabadi, N.M. Multi-Level Feature Abstraction from Convolutional
Neural Networks for Multimodal Biometric Identification. In Proceedings of the 2018 24th International Conference on Pattern
Recognition (ICPR), Beijing, China, 20–24 August 2018; pp. 3469–3476.
31. Aghajari, Z.H.; Teshnehlab, M.; Motlagh, M.R.J. A novel chaotic hetero-associative memory. Neurocomputing 2015, 167, 352–358.
[CrossRef]
32. Han, Q.; Wu, Z.; Deng, S.; Qiao, Z.; Huang, J.; Zhou, J.; Liu, J. Research on Face Recognition Method by Autoassociative Memory
Based on RNNs. Complexity 2018, 2018, 8524825. [CrossRef]
33. Hamada, Y.M. Liapunov’s stability on autonomous nuclear reactor dynamical systems. Prog. Nucl. Energy 2014, 73, 11–20.
[CrossRef]
34. Han, Q.; Liao, X.; Huang, T.; Peng, J.; Li, C.; Huang, H. Analysis and design of associative memories based on stability of cellular
neural networks. Neurocomputing 2012, 97, 192–200. [CrossRef]