Article

Triple-Type Feature Extraction for Palmprint Recognition

Lian Wu, Yong Xu, Zhongwei Cui, Yu Zuo, Shuping Zhao and Lunke Fei
1 School of Mathematics and Big Data, Guizhou Education University, Guiyang 550018, China
2 Bio-Computing Research Center, Harbin Institute of Technology, Shenzhen 518055, China
3 School of Computer, Guangdong University of Technology, Guangzhou 510006, China
* Authors to whom correspondence should be addressed.
Sensors 2021, 21(14), 4896; https://doi.org/10.3390/s21144896
Submission received: 22 June 2021 / Revised: 8 July 2021 / Accepted: 12 July 2021 / Published: 19 July 2021
(This article belongs to the Special Issue Biometric Systems for Personal Human Recognition)

Abstract

Palmprint recognition has received tremendous research interest due to its outstanding user-friendliness, such as its non-invasive nature and good hygiene. Most recent palmprint recognition studies, such as deep-learning methods, learn discriminative features from palmprint images and usually require a large number of labeled samples to achieve reasonably good recognition performance. However, palmprint images are usually limited because it is relatively difficult to collect enough palmprint samples, making most existing deep-learning-based methods ineffective. In this paper, we propose a heuristic palmprint recognition method that extracts triple types of palmprint features without requiring any training samples. We first extract the most important inherent features of a palmprint, including the texture, gradient and direction features, and encode them into triple-type feature codes. Then, we use the block-wise histograms of the triple-type feature codes to form the triple feature descriptors for palmprint representation. Finally, we employ a weighted matching-score level fusion to calculate the similarity between the triple-type feature descriptors of two compared palmprint images for palmprint recognition. Extensive experimental results on three widely used palmprint databases clearly show the promising effectiveness of the proposed method.

1. Introduction

As one of the most important solutions for personal authentication in our modern society, biometric recognition can effectively and efficiently identify an individual based on one's physiological or behavioral traits [1,2,3]. Various biometric recognition technologies, such as face, fingerprint and gait recognition, have been successfully used in many practical applications such as mobile payment, electronic access control and security checking [4,5,6]. In recent years, as a relatively new biometric technology, palmprint recognition has received tremendous research interest because a palmprint contains many discriminative and reliable features such as principal lines and rich textures [7,8,9]. Moreover, palmprint recognition is a non-invasive and hygienic biometric technology [10,11], which makes people prefer to use palmprints for personal authentication, especially during the current outbreak of COVID-19. Therefore, a growing number of studies have turned to this important and challenging technology [12,13,14,15,16].
Many palmprint recognition methods have been proposed in the past decades, which can be roughly classified into three categories according to the types of palmprint images [17]: high-resolution palmprint [18], low-resolution palmprint [7] and three-dimensional (3D) palmprint recognition [19] methods. High-resolution palmprint recognition methods generally extract the ridge directions, ridge densities and minutiae points for personal authentication in forensic applications [18]. 3D palmprint recognition mainly extracts the 3D surface features of the palm, such as curvature features. In general, both high-resolution and 3D palmprint images need to be captured with specially designed and expensive devices, which makes them impractical for commercial and civil applications. For these reasons, more efforts have been devoted to low-resolution palmprint recognition in recent years, where low-resolution palmprint images can easily be captured with common image acquisition equipment such as mobile phones and ordinary cameras. In this paper, we also focus on low-resolution palmprint feature extraction and recognition, and palmprint recognition in this paper refers to low-resolution palmprint recognition unless otherwise stated.
So far, numerous methods have been proposed for palmprint recognition in the literature. Early palmprint recognition methods usually extracted visible texture-based features of the palmprint, such as the local binary pattern (LBP) based feature descriptor [20], or visible line-based features such as the principal lines and wrinkles [21,22]. Most existing studies [17] have shown that coding-based methods are among the most effective palmprint representations. For this reason, a number of coding-based methods have been proposed for palmprint recognition [23,24,25,26,27,28,29,30]. The original coding method was proposed by Kong et al. [23], which encoded the Gabor filtering responses of palmprint images based on the winner-take-all rule and achieved promising palmprint verification performance. Inspired by that, Guo et al. [24] proposed a binary orientation co-occurrence vector (BOCV) method by encoding the Gabor filtering responses on multiple directions. After that, Zhang et al. [25] extended the BOCV (EBOCV) method by filtering out the fragile bits of the BOCV codes. Further, Fei et al. [26] proposed a double orientation code (DOC) method by encoding the direction features of the top-two filtering responses. Zhang et al. [9] and Fei et al. [17] conducted two surveys of the representative palmprint recognition methods. Moreover, subspace learning, sparse representation and deep learning have also been successfully used for palmprint recognition. For example, Lu et al. [31] proposed the eigenpalm method by extracting the PCA features of palmprints. Rida et al. [32,33] proposed sparse representation-based methods by extracting subspace features such as PCA and LDA information and establishing an ensemble sparse representation. Svoboda et al. [34] comparatively studied deep convolutional neural networks for palmprint recognition. Fei et al. [35] proposed a binary code learning method for heterogeneous palmprint recognition by learning the complementary features of multiple modalities of palmprint images. In addition, Genovese et al. [36] proposed the PalmNet method by incorporating Gabor responses and PCA into convolutional networks. In general, these learning-based methods usually require many labeled samples to learn and extract discriminative features. For example, subspace-based methods require enough labeled samples to measure the within-class and between-class distances, representation-based methods need plenty of labeled samples to completely represent the query samples, and deep-learning methods require massive numbers of training samples to fine-tune the large number of network parameters. Therefore, how to directly extract the discriminative features of palmprint images remains an interesting and important challenge.
In general, a palmprint image contains principal lines, wrinkles and ridge patterns, which are the most important characteristics of a palmprint. Of them, the principal lines and wrinkles have visible edges and thus carry gradient-based and direction-based features. In addition, the ridge-based information of a palmprint forms its texture features. In other words, the direction-based, gradient-based and texture-based features are the most important and distinctive features of a palmprint. Motivated by this, in this paper, we propose a triple-type feature descriptor (TFD) for palmprint representation and recognition. Figure 1 shows the basic idea of the proposed method. First, we extract the texture features by encoding the top-two maximum pixel distances within a local neighborhood, extract the gradient features by encoding the top-two edge responses of the palmprint image, and further extract the direction features by encoding the most dominant direction and the most reliable direction. Then, to make the triple features invariant to small misalignments, we calculate the block-wise histograms of the triple-type feature codes and concatenate them into feature vectors, respectively, as the final feature descriptors of the palmprint. Finally, we employ weighted matching score fusion to fuse the triple-type features for palmprint recognition. Extensive experimental results on three widely used contactless palmprint databases clearly demonstrate the effectiveness of the proposed method.
The main contribution of this paper can be summarized as follows:
  • We propose a new palmprint descriptor by extracting triple-type inherent features of palmprint image. Unlike single-type feature descriptor, our proposed method can completely represent the multiple important and inherent characteristics of palmprint images.
  • Unlike recent learning-based methods, which require many training samples, our proposed method can effectively extract discriminative features in a hand-crafted manner without requiring any training samples, so that it is suitable for few-shot and even zero-shot biometric recognition tasks.
  • We conduct both palmprint verification and palmprint identification experiments on three widely used challenging databases and the experimental results demonstrate that our proposed method consistently outperforms previous state-of-the-art methods.
The rest of this paper is organized as follows. Section 2 briefly reviews the related work. Section 3 elaborates our proposed triple-type feature descriptors for palmprint recognition. Section 4 presents the experimental results. Section 5 offers the concluding remarks.

2. Related Work

In this section, we first introduce the preprocessing of palmprint images. Then, we briefly review the representative palmprint feature extraction methods. Lastly, we introduce the fusion schemes of multiple biometrics.

2.1. Preprocessing of Palmprint Images

In general, original palmprint images are captured from the whole hand and usually include the entire hand as well as the background. Due to this, an originally captured palmprint image needs to be preprocessed to crop the central part of the palm. In other words, we need to extract the region of interest (ROI) of the original palmprint image before performing palmprint feature extraction and recognition [7]. So far, several ROI extraction methods have been proposed for palmprint image preprocessing, such as the PalmCode and deep-learning methods [7,37,38]. In particular, the PalmCode method [7] is one of the most popular ROI extraction methods. Specifically, it first detects the boundary of the whole palm in the original palmprint image by thresholding. Then, it determines the valley points between the fingers as the reference points, based on which a rectangular coordinate system can be established. After that, the central part of the palm image is located based on the coordinate system and cropped as the ROI of the palmprint image, as sketched in the code below. In this paper, we use the PalmCode method to extract the ROIs of the palmprint images for the experiments.
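The PalmCode-style ROI step can be summarized in a few lines of code. The sketch below is only illustrative: it assumes the two finger-valley reference points have already been detected, and the function name, the offset factor and the use of OpenCV are assumptions rather than the implementation used in [7].

```python
import cv2
import numpy as np

def extract_roi(hand_img, valley1, valley2, roi_size=128):
    """Crop a square palmprint ROI given two finger-valley reference points.

    Hypothetical sketch: the line joining the two valley points defines one
    axis of the local coordinate system, and the ROI is cropped at a fixed
    offset along the perpendicular direction (assuming the fingers point
    upward, so the palm lies below the valley line after alignment).
    """
    v1, v2 = np.float32(valley1), np.float32(valley2)
    # Rotate the image so that the valley line becomes horizontal.
    angle = float(np.degrees(np.arctan2(v2[1] - v1[1], v2[0] - v1[0])))
    center = (float((v1[0] + v2[0]) / 2), float((v1[1] + v2[1]) / 2))
    rot = cv2.getRotationMatrix2D(center, angle, 1.0)
    aligned = cv2.warpAffine(hand_img, rot, (hand_img.shape[1], hand_img.shape[0]))
    # Crop a square centered below the valley midpoint; the 0.6 offset factor
    # is illustrative only.
    cx, cy = int(center[0]), int(center[1])
    offset = int(0.6 * np.linalg.norm(v2 - v1))
    roi = aligned[cy + offset: cy + offset + roi_size,
                  cx - roi_size // 2: cx + roi_size // 2]
    return cv2.resize(roi, (roi_size, roi_size))
```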

2.2. Feature Extraction for Palmprint Representation

There have been a number of feature extraction methods for palmprint representation, which can be roughly classified into two categories: heuristic feature representation and learning-based feature representation. The heuristic methods mainly extract hand-crafted features of palmprint images, such as line and direction features [39,40]. For example, Huang et al. [21] proposed a modified finite Radon transform (MFRAT) method to extract the three principal lines of the palmprint for personal verification. Wu et al. [22] proposed a DoG method to extract both principal lines and wrinkles for palmprint representation. In addition, Dai et al. [20] designed a multiple-band wavelet-based method to extract the texture features of palmprint images. In recent years, more efforts have been devoted to learning-based palmprint representation due to the impressive performance of deep learning on image classification [36,38]. For example, Genovese et al. [36] proposed the PalmNet method, which uses a CNN for palmprint recognition. In addition, Fei et al. [38] proposed a binary code learning method to extract discriminative features for palmprint recognition. Furthermore, Fei et al. [41] proposed a multi-feature learning method for palmprint recognition by learning the complementary binary codes of multiple types of features.
In general, learning-based methods usually require enough labeled samples to train the feature extraction model. Due to this, many studies still focus on hand-crafted feature extraction of palmprint images, and the direction-based coding methods are among the most effective hand-crafted feature representations [27]. The most typical direction coding methods are the dominant direction-based feature descriptors, such as the competitive code [23], discriminative and robust competitive code (DRCC) [39], and robust line orientation code (RLOC) [27] methods. They usually first define a series of feature extraction templates, such as Gabor filters and the MFRAT, and convolve them with a palmprint image to compute the direction feature responses. Then, they treat the direction of the template that produces the maximum convolution response as the dominant direction of the palmprint image and convert it into a feature code. For example, the competitive code method uses six Gabor filters as templates and takes the direction index of the Gabor filter producing the maximum filtering response as the direction feature. Unlike the competitive code, the RLOC method uses the MFRAT as the template to extract the dominant direction features based on the same winner-take-all rule as the competitive code. The DRCC method improves the competitive code method by using the filtering results within a local region to extract robust dominant direction features. Different from the dominant direction feature descriptors, a number of methods extract multiple direction features of palmprint images. For example, the BOCV method calculates the convolution responses between the direction feature templates and the palmprint image, and converts the convolution results on all directions into multiple feature codes for palmprint recognition. Moreover, the E-BOCV method filters out the fragile bits with small direction responses from the BOCV code maps. In addition, the double orientation code method [26] uses twelve Gabor filters with different directions to convolve with a palmprint image, and encodes the two dominant directions with the two maximum responses into decimal codes. More direction feature extraction methods are comparatively studied in the palmprint survey literature [9,27].

2.3. Multiple Feature Fusion

Fusing multiple features is a widely used and effective way to improve the overall biometric recognition performance. There are four typical schemes for multiple-feature fusion: sensor-level, feature-level, matching-score-level and decision-level fusion [42,43,44]. In general, the sensor-level fusion scheme [42] uses different kinds of sensors, such as different cameras, to capture multiple samples of the same biometric trait, which are then fused using one of the other fusion schemes. Feature-level fusion [44] mainly extracts different kinds of features using different methods and then concatenates these features for representation and matching. In addition, decision-level fusion means that different decision makers, such as different classification methods, are used, and the final decision is made based on some strategy such as a voting scheme. By contrast, score-level fusion first extracts multiple features and then performs feature matching for each of them respectively; finally, the matching results of the different features are combined to calculate the overall matching result. So far, score-level fusion is one of the most widely used fusion schemes because it is simple yet effective for multiple-feature fusion. In this paper, we use a weighted matching-score-level fusion to fuse the triple-type features of palmprint images for palmprint recognition.

3. Triple-Type Feature Encoding and Matching

In this section, we first introduce the triple-type feature extraction of our proposed method. Then, we detail the matching fusion of the triple-type features.

3.1. Texture Feature Extraction of Palmprint Images

It is well recognized that texture is one of the most important characteristics of a palmprint image. Motivated by this, we first extract texture features from palmprint images. Different from most existing LBP-like texture descriptors [20], we propose a robust texture feature descriptor by selecting the neighbors with the maximum distances to the center point in a local 3 × 3 neighborhood. Specifically, we first calculate the absolute gray value differences between the center point and its eight neighbors, obtaining eight gray value differences $d_i \ (i = 1, 2, \ldots, 8)$. Then, we sort these gray value differences in descending order. Third, we select the two points that have the two maximum gray value differences with the center point, and encode their direction numbers into the texture feature code as follows:
$$T_{code} = (m_1 - 1) \times 8 + (m_2 - 1), \qquad (1)$$
where $T_{code}$ represents the texture feature code of the center point of the palmprint image, and $m_1$ and $m_2$ denote the direction numbers of the two neighboring points with the top-two gray value differences. Specifically, $m_1$ and $m_2$ can be calculated as follows:
$$m_1 = \arg\max_j \{d_j\}, \qquad (2)$$
and
$$m_2 = \arg\max_j \left( \{d_j\} \setminus \{d_{m_1}\} \right). \qquad (3)$$
If two neighboring points have the same gray value difference with the center point, we select the smaller direction number to encode the texture feature. It is easy to check that $T_{code}$ ranges from 1 to 62, because the smallest direction number combination {1, 2} is encoded as 1 and the largest combination {8, 7} is encoded as 62. Figure 2 illustrates the basic idea of the robust texture feature extraction scheme.
It can be seen that the two selected neighboring points have larger gray value differences with the center point than the other neighbors, so they are more robust to small random noise. Therefore, our proposed texture feature extraction method can better represent the texture information of palmprint images.
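To make the encoding above concrete, the following sketch computes the $T_{code}$ map of a grayscale palmprint image. The neighbor ordering and the function name are assumptions made here for illustration; the actual direction numbering is fixed by Figure 2.

```python
import numpy as np

# Assumed 1..8 numbering of the eight neighbours around the centre pixel.
NEIGHBOR_OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                    (1, 1), (1, 0), (1, -1), (0, -1)]

def texture_code(img):
    """Return the T_code map of a grayscale image (border pixels left at 0)."""
    img = img.astype(np.int32)
    h, w = img.shape
    codes = np.zeros((h, w), dtype=np.uint8)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Absolute gray-value differences to the eight neighbours.
            d = np.array([abs(img[y + dy, x + dx] - img[y, x])
                          for dy, dx in NEIGHBOR_OFFSETS])
            # 1-based direction numbers of the top-two differences; ties fall
            # to the smaller direction number thanks to the stable sort.
            order = np.argsort(-d, kind="stable")
            m1, m2 = order[0] + 1, order[1] + 1
            codes[y, x] = (m1 - 1) * 8 + (m2 - 1)   # Equation (1)
    return codes
```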

3.2. Gradient Feature Extraction of Palmprint Images

It can be seen that a palmprint consists of rich textures, lines and wrinkles, which carry obvious edge information. To better represent these edge features, we employ edge operators to extract gradient features from palmprint images. In this paper, we use the simple yet effective Kirsch operator [13] to extract the gradient features. Specifically, we first employ eight Kirsch operators on eight different directions and convolve them with the palmprint image to produce eight edge responses. Then, we sort the absolute values of these edge responses on the eight directions. Finally, we select the direction numbers of the templates with the two maximum edge responses and encode them as follows:
$$G_{code} = (p_1 - 1) \times 8 + (p_2 - 1), \qquad (4)$$
where $p_1$ and $p_2$ denote the direction numbers of the Kirsch templates with the top-two absolute edge responses, which can be calculated similarly to (2) and (3). $G_{code}$ represents the gradient feature code, which also ranges from 1 to 62, like $T_{code}$. Figure 3 shows the main procedure of the gradient feature extraction and representation, where the direction numbers of the templates are ordered as in Figure 2.
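A corresponding sketch for $G_{code}$ is given below. The eight masks are the standard Kirsch operators, but their numbering relative to Figure 2 and the use of SciPy for the convolution are assumptions.

```python
import numpy as np
from scipy.signal import convolve2d

# Standard Kirsch masks; the order (and hence the direction numbering) is assumed.
KIRSCH = [np.array(k) for k in [
    [[ 5,  5,  5], [-3, 0, -3], [-3, -3, -3]],
    [[ 5,  5, -3], [ 5, 0, -3], [-3, -3, -3]],
    [[ 5, -3, -3], [ 5, 0, -3], [ 5, -3, -3]],
    [[-3, -3, -3], [ 5, 0, -3], [ 5,  5, -3]],
    [[-3, -3, -3], [-3, 0, -3], [ 5,  5,  5]],
    [[-3, -3, -3], [-3, 0,  5], [-3,  5,  5]],
    [[-3, -3,  5], [-3, 0,  5], [-3, -3,  5]],
    [[-3,  5,  5], [-3, 0,  5], [-3, -3, -3]],
]]

def gradient_code(img):
    """Return the G_code map from the top-two absolute Kirsch responses (Equation (4))."""
    responses = np.stack([np.abs(convolve2d(img.astype(float), k, mode="same"))
                          for k in KIRSCH])               # shape (8, H, W)
    order = np.argsort(-responses, axis=0, kind="stable") # descending per pixel
    p1, p2 = order[0] + 1, order[1] + 1                   # 1-based direction numbers
    return (p1 - 1) * 8 + (p2 - 1)
```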

3.3. Direction Feature Extraction of Palmprint Images

From the overview of palmprint descriptors, directions have served as one of the most important and discriminative features and have achieved promising effectiveness for palmprint recognition. Motivated by this, in this paper we also extract direction features. In general, the common way of direction feature extraction is to first define a group of direction-based templates, then convolve these templates with the palmprint image to obtain the direction responses, and finally engineer an encoding scheme to convert the direction responses into direction feature codes for palmprint representation and recognition.
Various direction-based templates have been used for direction feature extraction of palmprints, such as Gabor filters, Gaussian filters and the MFRAT. Most existing studies [17] have shown that Gabor filter-based templates can better characterize the direction characteristics, and thus can effectively extract the direction features of a palmprint. Due to this, in this paper, we also employ Gabor filters as the direction-based templates to extract the direction features of palmprint images. Specifically, we first define twelve direction-based templates based on the real parts of the Gabor filters with directions $\theta_j = (j - 1)\pi / N_\theta \ (j = 1, \ldots, N_\theta)$, where $N_\theta = 12$ denotes the number of templates as well as the number of directions. Let $G(\theta)$ represent the direction-based template with direction $\theta$ and $I$ represent a palmprint image; we first calculate the direction responses of the palmprint by convolving these direction-based templates with the palmprint image as follows:
$$c_j(x, y) = G(\theta_j) * I(x, y), \quad (j = 1, 2, \ldots, N_\theta), \qquad (5)$$
where "*" denotes the convolution operator and $c_j$ represents the direction response of the palmprint on direction $\theta_j$.
To better capture the discriminative direction features, we first select the direction with the maximum convolution response, referred to as $q_1$, which represents the most dominant direction of the palmprint. Furthermore, we select the direction with the maximum direction response interval to its neighboring directions, where the direction response interval (DRI) of direction $\theta_j$ can be calculated as follows:
$$DRI_j(x, y) = \left| c_j(x, y) - c_{\varphi(j)}(x, y) \right| + \left| c_j(x, y) - c_{\phi(j)}(x, y) \right|, \qquad (6)$$
where $\varphi(j)$ and $\phi(j)$ denote the two nearest neighboring directions of direction $\theta_j$: $\varphi(j)$ equals $N_\theta$ if $j = 1$ and $(j - 1)$ otherwise, while $\phi(j)$ equals 1 if $j = N_\theta$ and $(j + 1)$ otherwise. Therefore, the direction with the maximum direction response interval, referred to as $q_2$, can be selected as:
$$q_2 = \arg\max_j DRI_j, \quad j \in \{1, 2, \ldots, N_\theta\}. \qquad (7)$$
Previous studies have shown that the most dominant direction (i.e., $q_1$) has competitive discriminative power. In addition, the direction with the maximum DRI has good reliability because it is hardly affected by the neighboring directions. Due to this, we encode these two directions, with the maximum convolution response and the maximum DRI, as the direction feature of the palmprint:
$$D_{code}(x, y) = (q_1 - 1) \times N_\theta + q_2, \qquad (8)$$
where $D_{code}$ represents the direction feature code. Figure 4 shows the basic idea of forming the direction feature code of a palmprint image.
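The direction code can be sketched with OpenCV's Gabor kernels as below. The Gabor parameters (kernel size, sigma, wavelength, aspect ratio) are illustrative placeholders rather than the values used in this paper; only the encoding of $q_1$ and $q_2$ follows Equations (5)–(8).

```python
import cv2
import numpy as np

def direction_code(img, n_theta=12, ksize=35, sigma=5.6, lambd=11.0):
    """Return the D_code map built from q1 (dominant direction) and q2 (max DRI)."""
    img = img.astype(np.float32)
    thetas = [j * np.pi / n_theta for j in range(n_theta)]   # theta_j = (j - 1) * pi / N_theta
    # Responses c_j of the real-part Gabor templates (Equation (5)).
    c = np.stack([cv2.filter2D(img, cv2.CV_32F,
                               cv2.getGaborKernel((ksize, ksize), sigma, t, lambd, 0.5, 0))
                  for t in thetas])                          # shape (N_theta, H, W)
    q1 = np.argmax(c, axis=0)                                # most dominant direction, 0-based
    # Direction response interval against the two circular neighbours (Equation (6)).
    dri = np.abs(c - np.roll(c, 1, axis=0)) + np.abs(c - np.roll(c, -1, axis=0))
    q2 = np.argmax(dri, axis=0)                              # most reliable direction, 0-based
    return q1 * n_theta + (q2 + 1)                           # Equation (8) with 1-based q1, q2
```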

3.4. Feature Matching Fusion

In general, feature-level and matching-score-level fusion are the two most effective schemes for multiple-feature fusion. Note that different types of palmprint features have different discriminative power, and feature-level fusion cannot properly reflect this difference. Due to this, in this paper, we employ the matching-score-level fusion scheme to fuse our proposed triple-type palmprint features. Specifically, we first form block-wise histogram feature descriptors for the triple types of palmprint features. Then, we fuse the matching scores of the triple types of feature descriptors by assigning different weights to them. In the following, we detail the feature descriptor formation and the weighted matching-score-level fusion procedures.
It is well known that different regions of a palmprint usually have obviously different visual characteristics. For example, some regions have more widely distributed lines and wrinkles, so these regions contain more gradient and direction features. In order to overcome small misalignments and achieve an invariant palmprint representation, in this paper, we form block-wise histogram feature vectors as the feature descriptors for palmprint representation. Specifically, given a palmprint image, we first calculate the triple feature code maps of the palmprint image. Then, we divide these palmprint feature maps into non-overlapping blocks, the sizes of which are empirically set to 16 × 16 pixels. Third, we calculate the histograms of the texture feature codes, gradient feature codes and direction feature codes for each block, respectively. Finally, we concatenate the block-wise feature code histograms into three feature vectors for the triple types of features, respectively, so that the triple-type feature descriptors, named TFD, are obtained for a palmprint image.
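A minimal sketch of the block-wise descriptor construction is shown below; the function name is an assumption. For the texture and gradient codes the number of bins would be 62, and for the direction code it would be $N_\theta^2 = 144$.

```python
import numpy as np

def block_histogram_descriptor(code_map, n_bins, block=16):
    """Concatenate per-block histograms of a feature-code map into one vector."""
    h, w = code_map.shape
    feats = []
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            patch = code_map[by:by + block, bx:bx + block]
            # One bin per integer code value in 1..n_bins.
            hist, _ = np.histogram(patch, bins=n_bins, range=(1, n_bins + 1))
            feats.append(hist)
    return np.concatenate(feats).astype(np.float64)

# e.g. descriptor_t = block_histogram_descriptor(texture_code(roi), n_bins=62)
```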
Having obtained the triple types of feature descriptors of palmprint images, we use the simple yet effective Chi-square distance to calculate the similarity of two palmprint images. We first calculate the Chi-square distance for each pair of the triple feature descriptors of the two compared palmprint images. Then, we fuse the triple Chi-square distances to form the final matching score as follows:
$$S(u, v) = \sum_{i=1}^{3} w_i S_i(u, v), \qquad (9)$$
where $S(u, v)$ denotes the matching score of the compared feature descriptors, $u$ and $v$, of two compared palmprint images. $u_i$ and $v_i \ (i = 1, 2, 3)$ correspond to the descriptors of the texture, gradient and direction features, respectively. $w_i > 0 \ (i = 1, 2, 3)$ are the weighting parameters that make a suitable tradeoff among the matching results of the triple feature descriptors, with $\sum_{i=1}^{3} w_i = 1$. $S_i \ (i = 1, 2, 3)$ represents the matching score calculated by the Chi-square distance on the $i$th feature descriptor, i.e., the texture, gradient and direction feature descriptors, respectively, which can be calculated as follows:
$$S_i(u, v) = \sum_{k=1}^{N_i} \frac{(u_{i,k} - v_{i,k})^2}{u_{i,k} + v_{i,k}}, \qquad (10)$$
where $u_{i,k}$ and $v_{i,k}$ denote the $k$th bins of $u_i$ and $v_i$, respectively, and $N_i$ is the length of the $u_i$ and $v_i$ feature descriptors. Therefore, a smaller matching score $S(u, v)$ means a higher similarity between the two compared palmprint images. In this way, the similarity of two palmprint images can be easily calculated for palmprint recognition.
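Equations (9) and (10) translate directly into a short scoring routine. In the sketch below, the small epsilon guarding against empty histogram bins is an addition made here for numerical safety, and the default weights follow the setting reported in Section 4.2.

```python
import numpy as np

def chi_square(u, v, eps=1e-12):
    """Chi-square distance between two histogram descriptors (Equation (10))."""
    return float(np.sum((u - v) ** 2 / (u + v + eps)))

def fused_score(descs_u, descs_v, weights=(0.1, 0.1, 0.8)):
    """Weighted matching score over the texture, gradient and direction
    descriptors (Equation (9)); a smaller score means a higher similarity."""
    return sum(w * chi_square(u, v)
               for w, u, v in zip(weights, descs_u, descs_v))
```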

4. Experiment

In this section, we first introduce the three widely used palmprint image databases for our experiments, namely the CASIA, IITD and GPDS palmprint image databases. Then, we conduct palmprint verification and identification experiments and analyze the experimental results. Finally, we analyze the computational time cost of the proposed method. All experiments are conducted on a PC with a dual-core Intel(R) i7-7700 (3.60 GHz) CPU and 16 GB RAM.

4.1. Databases

The CASIA palmprint image database [45] consists of 5502 palmprint images captured with a normal camera from both the left and right palms of 312 subjects. Each palm provided about 8 to 17 samples of the whole hand. The CASIA database therefore contains 612 different classes of palmprint images, because the samples of the left and right hands are considered as different classes. In this experiment, we used the PalmCode method to extract the ROIs of the palmprint images and resized them to 128 × 128 pixels.
The IITD palmprint image database [46] consists of 2601 contactless palmprint images captured from both the left and right hands of 230 individuals, each of which provided 5 or 6 images. Therefore, the IITD database consists of 460 different classes of palmprint images, because the samples of the left and right hands are considered as different classes. All the palmprint images were captured with a common camera inside a box, so the hand poses, such as rotation and translation, vary. The ROIs with sizes of 128 × 128 pixels have already been cropped and are available in the database.
The GPDS palmprint image database [47] consists of 1000 contactless palmprint images of the right hands of 100 subjects, each of which contributed 10 palmprint images as well as their ROIs. In the experiments, all ROIs of the GPDS database were resized to 128 × 128 pixels.
Figure 5 presents some typical palmprint image samples selected from the CASIA, IITD and GPDS databases. It can be seen that samples from different databases show very different characteristics.

4.2. Palmprint Verification Results

In this subsection, we conduct palmprint verification experiments on the CASIA, IITD and GPDS databases. In general, palmprint verification compares a query image with a labeled image to verify whether the query image comes from the same individual as the labeled image. In this experiment, we compare each pair of palmprint images from the same database. A comparison is called a genuine matching if the two compared palmprint images are from the same palm, and an impostor matching otherwise. Then, we calculate the false acceptance rate (FAR) and genuine acceptance rate (GAR) [7] on each database to evaluate the proposed method. For a fair evaluation, we also implement several state-of-the-art palmprint representation and recognition methods, namely the competitive code [23], ordinal code [30], HOL [29], DoN [28], E-BOCV [25], DRCC [39] and ALDC [40] methods, and compare them with our proposed method. For our proposed TFD method, we empirically set the weighting parameters $w_1$, $w_2$ and $w_3$ to 0.1, 0.1 and 0.8, respectively. We comparatively analyze the settings of these parameters in Section 4.4.
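As a hypothetical sketch of the evaluation protocol, FAR and GAR at a given decision threshold can be computed from the genuine and impostor score sets as below (scores are Chi-square distances, so smaller means more similar); sweeping the threshold traces the ROC curves reported in Figure 6.

```python
import numpy as np

def far_gar(genuine_scores, impostor_scores, threshold):
    """False acceptance rate and genuine acceptance rate at one threshold."""
    genuine = np.asarray(genuine_scores)
    impostor = np.asarray(impostor_scores)
    gar = float(np.mean(genuine <= threshold))   # genuine pairs correctly accepted
    far = float(np.mean(impostor <= threshold))  # impostor pairs wrongly accepted
    return far, gar
```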
Figure 6 shows the ROC curves, i.e., GAR versus FAR, of our proposed method and the compared methods. We can see that our proposed method achieves a comparable and even better GAR than all seven compared methods at the same FAR. This is because our proposed method extensively exploits multiple features of palmprint images, which provides more information for palmprint representation. By contrast, the conventional heuristic palmprint recognition methods, such as the competitive code, E-BOCV and DoN, only exploit the direction features and thus cannot completely represent the texture and gradient information of palmprint images. In addition, we see that the proposed method achieves slightly better performance on the IITD database than on the CASIA and GPDS databases. The possible reason is that the palmprint images of the IITD database contain more distinct line patterns than the samples of the CASIA and GPDS databases. These distinct patterns provide more discriminative gradient-based and direction-based features, so a better recognition accuracy can be obtained on the IITD database.

4.3. Palmprint Identification Results

Different from palmprint verification, palmprint identification compares a query sample with a group of labeled samples and aims to identify the label of the query sample. In the palmprint identification experiment, for each database, we first randomly select n palmprint images per palm to form the gallery sample set and use the rest as query samples, where n is set to 1, 2, 3 and 4, respectively. Then, we calculate the rank-one identification accuracy rates of the proposed method and of the conventional representative methods, including the competitive code, ordinal code, HOL, DoN, EBOCV, DRCC and ALDC. Moreover, the representative texture descriptors LBP [48] and LDP [49] are also implemented and compared. For fair comparison, all methods were repeated 10 times and the average identification accuracy rates are reported. Table 1 tabulates the average rank-one identification results of the different methods on the CASIA, IITD, and GPDS databases.
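The rank-one identification protocol can be sketched as follows; the helper name and its `score_fn` argument (e.g., the weighted Chi-square fusion outlined in Section 3.4) are assumptions made for illustration.

```python
import numpy as np

def rank_one_accuracy(query_descs, query_labels, gallery_descs, gallery_labels, score_fn):
    """Assign each query the label of its nearest gallery sample (smallest score)."""
    correct = 0
    for q_desc, q_label in zip(query_descs, query_labels):
        scores = [score_fn(q_desc, g_desc) for g_desc in gallery_descs]
        if gallery_labels[int(np.argmin(scores))] == q_label:
            correct += 1
    return correct / len(query_labels)
```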
From Table 1, we can see that our proposed method consistently outperforms the nine compared methods by achieving obviously higher rank-one identification rates. Specifically, our proposed method achieves much better accuracy rates than the direction-based palmprint recognition methods such as the competitive code, DoN and DRCC. The possible reason is that, compared with the direction-based methods, our proposed method not only extracts direction-type features but also exploits texture-type and gradient-type features, which provide more informative and discriminative information than the direction features alone, so that higher identification accuracy rates can be obtained. Moreover, our proposed method significantly improves the identification accuracy over the LBP and LDP methods. This is because LBP and LDP only describe a single type of feature, such as texture-based or edge-based features. By contrast, our proposed method additionally extracts direction-based and gradient-based features, which have shown promising discriminative power for palmprint recognition, so that better recognition accuracy can be obtained.

4.4. Parameter Analysis

Our proposed method extracts triple types of palmprint features and fuses them with a weighted matching-score-level fusion scheme. To evaluate the importance and discriminative power of the different types of features, we set different values of the weighting parameters and compare the identification performance of the proposed method. It is impractical to enumerate all possible values of these parameters. Therefore, in this experiment, we first set $w_1$ to values ranging from 0 to 1 with an interval of 0.1, and set $w_2$ from 0 to $1 - w_1$ with an interval of 0.1. Accordingly, we set $w_3 = 1 - w_1 - w_2$. After that, we perform palmprint identification with the proposed method on the CASIA, IITD and GPDS databases, where one palmprint image per palm is randomly selected as the gallery sample. Figure 7 shows the comparative accuracy rates of the proposed method versus different values of the parameters.
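The weight enumeration described above amounts to a simple grid search. The sketch below assumes an `eval_fn` callback that runs the identification experiment for a given weight triple and returns its accuracy; both names are hypothetical.

```python
import numpy as np

def grid_search_weights(eval_fn, step=0.1):
    """Enumerate (w1, w2, w3) on a coarse grid with w1 + w2 + w3 = 1."""
    best, best_acc = None, -1.0
    for w1 in np.arange(0.0, 1.0 + 1e-9, step):
        for w2 in np.arange(0.0, 1.0 - w1 + 1e-9, step):
            w3 = round(1.0 - w1 - w2, 2)
            acc = eval_fn((round(w1, 2), round(w2, 2), w3))
            if acc > best_acc:
                best, best_acc = (round(w1, 2), round(w2, 2), w3), acc
    return best, best_acc
```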
We can see from Figure 7 that the proposed method usually performs best when $w_1$ and $w_2$ are set to around 0.1 to 0.3 and, accordingly, $w_3$ is set to about 0.7 to 0.8. This demonstrates that the direction features of palmprint images usually convey more discriminative information than the other two types of features, which is consistent with existing studies in which the state-of-the-art palmprint methods usually extract direction features for palmprint recognition. In addition, the experimental results also clearly show that, by combining the triple types of features with suitable weights, the proposed method achieves obviously better recognition performance than single-type feature representation, demonstrating the effectiveness of the proposed method.

4.5. Computational Time Analysis

To evaluate the efficiency of feature extraction of our proposed method, in this subsection, we measure its feature extraction time. Moreover, we compare the computational cost of the proposed method with state-of-the-art feature extraction methods, including the competitive code, ordinal code, EBOCV, DoN and DRCC methods. For a fair evaluation and comparison, we extract the features of 100 palmprint images with each method and report the average feature extraction time per palmprint image. Table 2 summarizes the average feature extraction times of the different methods. We can see that the proposed method has a slightly higher time cost than the compared methods. The possible reason is that our proposed method extracts three types of features, whereas the other descriptors extract only one. It is worth noting that our proposed method takes about 0.05 s to extract the features of a palmprint image, which is acceptable for practical applications. Therefore, since our proposed method significantly improves the recognition accuracy over the existing methods, our proposed TFD method offers a good tradeoff when both recognition effectiveness and efficiency are concerned.

5. Conclusions

In this paper, we propose a triple-type feature descriptor for palmprint recognition. To fully exploit the discriminative features, our proposed method extracts the texture-type, gradient-type and direction-type features, which are the most important components of a palmprint image. Then, we use the simple and effective matching-score-level fusion to combine the triple-type features for palmprint matching. Extensive experiments on three challenging palmprint databases clearly show that our proposed method outperforms previous palmprint descriptors. In future work, we will explore other types of hand-crafted features to further improve palmprint recognition performance.

Author Contributions

L.W. conceived of the presented idea and wrote 80% of the content of this paper. Y.X. also conceived the main idea and revised this paper. Z.C. carried out the experiments and wrote 20% of the content of this paper. Y.Z. revised this paper and provided helpful comments. S.Z. conducted some experiments and revised this paper. L.F. conceived of the idea and provided some comments. All authors have read and agreed to the published version of the manuscript.

Funding

This work is partly supported by Key Disciplines of Guizhou Province-Computer Science and Technology (No. ZDXK[2018]007), the Science and Technology Foundation of Guizhou Province (No. QianKeHeJiChu-ZK[2021]YiBan 334), Research Projects of Innovation Group of Guizhou Provincial Department of Education (No. QianJiaoHeKY[2021]022), and Guizhou Provincial Service Industry Development Guide fund project in 2018 (The Third Batch, No. QianFaGaiFuWu[2018]1181).

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare that there is no conflict of interest.

References

  1. Zhang, D. Automated Biometrics: Technologies and Systems; Springer Science & Business Media: New York, NY, USA, 2013. [Google Scholar]
  2. Rida, I.; Máadeed, N.; Máadeed, S.A.; Bakshi, S. A comprehensive overview of feature representation for biometric recognition. Multimed. Tools Appl. 2020, 79, 4867–4890. [Google Scholar] [CrossRef]
  3. Fei, L.; Zhang, B.; Tian, C.; Teng, S.; Wen, J. Jointly learning multi-instance hand-based biometric descriptor. Inf. Sci. 2021, 562, 1–12. [Google Scholar] [CrossRef]
  4. Lu, J.; Liong, V.; Zhou, J. Simultaneous local binary feature learning and encoding for homogeneous and heterogeneous face recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 1979–1994. [Google Scholar] [CrossRef] [PubMed]
  5. Liu, F.; Zhao, Y.; Liu, G.; Shen, L. Fingerprint pore matching using deep features. Pattern Recognit. 2020, 102, 107208. [Google Scholar] [CrossRef]
  6. Rida, I.; Máadeed, N.; Máadeed, S. Robust gait recognition: A comprehensive survey. IET Biom. 2019, 8, 14–28. [Google Scholar] [CrossRef]
  7. Zhang, D.; Kong, W.K.; You, J.; Wong, L.M. Online palmprint identification. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 1041–1050. [Google Scholar] [CrossRef] [Green Version]
  8. Zhang, D.; Kong, W.K.; You, J.; Wong, L.M. A survey of palmprint recognition. Pattern Recognit. 2009, 42, 1408–1418. [Google Scholar]
  9. Zhang, D.; Zuo, W.; Yue, F. A comparative study of palmprint recognition algorithms. ACM Comput. Surv. 2012, 44, 1–37. [Google Scholar] [CrossRef]
  10. Fei, L.; Xu, Y.; Zhang, B.; Fang, X.; Wen, J. Low-rank representation integrated with principal line distance for contactless palmprint recognition. Neurocomputing 2016, 218, 264–275. [Google Scholar] [CrossRef]
  11. Zhang, L.; Li, L.; Yang, A.; Shen, Y.; Yang, M. Towards contactless palmprint recognition: A novel device, a new benchmark, and a collaborative representation based identification approach. Pattern Recognit. 2017, 69, 199–212. [Google Scholar] [CrossRef]
  12. Jia, W.; Zhang, B.; Lu, J.; Zhu, Y.; Zhao, Y.; Zuo, W.; Ling, H. Palmprint recognition based on complete direction representation. IEEE Trans. Image Process. 2017, 26, 4483–4498. [Google Scholar] [CrossRef] [PubMed]
  13. Luo, Y.; Zhao, L.; Zhang, B.; Jia, W.; Xue, F.; Lu, J.; Zhu, Y.; Xu, B. Local line directional pattern for palmprint recognition. Pattern Recognit. 2016, 50, 26–44. [Google Scholar] [CrossRef]
  14. Zhao, S.; Zhang, B. Learning complete and discriminative direction pattern for robust palmprint recognition. IEEE Trans. Image Process. 2021, 30, 1001–1014. [Google Scholar] [CrossRef] [PubMed]
  15. Máadeed, S.; Jiang, X.; Rida, I. Palmprint identification using sparse and dense hybrid representationm. Multimed. Tools Appl. 2019, 78, 5665–5679. [Google Scholar] [CrossRef]
  16. Rida, I.; Máadeed, S.; Jiang, X.; Fei, L. An Ensemble Learning Method Based on Random Subspace Sampling for Palmprint Identification. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Brighton, UK, 12–17 May 2019; pp. 2047–2051. [Google Scholar]
  17. Fei, L.; Lu, G.; Jia, W.; Teng, S.; Zhang, D. Feature extraction methods for palmprint recognition: A Survey and Evaluation. IEEE Trans. Syst. Man Cybern. Syst. 2019, 49, 346–363. [Google Scholar] [CrossRef]
  18. Dai, J.; Feng, J.; Zhou, J. Robust and efficient ridge-based palmprint matching. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 1618–1632. [Google Scholar]
  19. Fei, L.; Zhang, B.; Jia, W.; Wen, J. Feature extraction for 3-D palmprint recognition: A Survey. IEEE Trans. Instrum. Meas. 2020, 69, 645–656. [Google Scholar] [CrossRef]
  20. Dai, Q.; Bi, N.; Huang, D. M-band wavelets application to palmprint recognition based on texture features. In Proceedings of the International Conference on Image Processing, Singapore, 24–27 October 2004; pp. 893–896. [Google Scholar]
  21. Huang, D.S.; Jia, W.; Zhang, D. Palmprint verification based on principal lines. Pattern Recognit. 2008, 41, 1316–1328. [Google Scholar] [CrossRef]
  22. Wu, X.; Zhang, D.; Wang, K. Palm Line Extraction and Matching for Personal Authentication. IEEE Trans. Syst. Man Cybern. 2006, 36, 978–987. [Google Scholar]
  23. Kong, A.W.-K.; Zhang, D. Competitive Coding Scheme for Palmprint Verification. In Proceedings of the International Conference on Pattern Recognition, Cambridge, UK, 23–26 August 2004; pp. 520–523. [Google Scholar]
  24. Guo, Z.; Zhang, D.; Zhang, L. Palmprint verification using binary orientation co-occurrence vector. Pattern Recognit. Lett. 2009, 30, 1219–1227. [Google Scholar] [CrossRef]
  25. Zhang, L.; Li, H.; Niu, J. Fragile Bits in Palmprint Recognition. IEEE Signal Process. Lett. 2012, 19, 663–666. [Google Scholar] [CrossRef]
  26. Fei, L.; Xu, Y.; Tang, W. Double-orientation code and nonlinear matching scheme for palmprint recognition. Pattern Recognit. 2016, 49, 89–101. [Google Scholar] [CrossRef]
  27. Jia, W.; Huang, D.; Zhang, D. Palmprint verification based on robust line orientation code. Pattern Recognit. 2008, 41, 1504–1513. [Google Scholar] [CrossRef]
  28. Zheng, Q.; Kumar, A.; Pan, G. A 3D feature descriptor recovered from a single 2D palmprint image. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 1272–1279. [Google Scholar] [CrossRef] [PubMed]
  29. Jia, W.; Hu, R.X.; Lei, Y.K.; Zhao, Y.; Gui, J. Histogram of oriented lines for palmprint recognition. IEEE Trans. Syst. Man Cybern. Syst. 2014, 44, 385–395. [Google Scholar] [CrossRef]
  30. Sun, Z.; Wang, L.; Tan, T. Ordinal feature selection for iris and palmprint recognition. IEEE Trans. Image Process. 2014, 23, 3922–3934. [Google Scholar] [CrossRef] [PubMed]
  31. Lu, G.; Zhang, D.; Wang, K. Palmprint recognition using eigenpalms features. Pattern Recognit. Lett. 2003, 24, 1463–1467. [Google Scholar] [CrossRef]
  32. Rida, I.; Máadeed, S. Palmprint identification using an ensemble of sparse representations. IEEE Access 2018, 6, 3241–3248. [Google Scholar] [CrossRef]
  33. Rida, I.; Hérault, R.; Marcialis, G.; Gasso, G. Palmprint recognition with an efficient data driven ensemble classifier. Pattern Recognit. Lett. 2019, 126, 21–30. [Google Scholar] [CrossRef]
  34. Svoboda, J.; Masci, J.; Bronstein, M.M. Palmprint recognition via discriminative index learning. In Proceedings of the International Conference on Pattern Recognition, Cancún, Mexico, 4–8 December 2016; pp. 1–6. [Google Scholar]
  35. Fei, L.; Zhang, B.; Xu, Y.; Tian, C.; Imad, R.; Zhang, D. Jointly heterogeneous palmprint discriminant feature learning. IEEE Trans. Neural Networks Learn. Syst. 2021, 1–12. [Google Scholar] [CrossRef]
  36. Genovese, A.; Piuri, V.; Plataniotis, K.; Scotti, F. PalmNet: Gabor-PCA convolutional networks for touchless palmprint recognition. IEEE Trans. Inf. Forensics Secur. 2019, 14, 3160–3174. [Google Scholar] [CrossRef] [Green Version]
  37. Aykut, M.; Ekinci, M. Developing a contactless palmprint authentication system by introducing a novel ROI extraction method. Image Vis. Comput. 2015, 40, 65–74. [Google Scholar] [CrossRef] [Green Version]
  38. Fei, L.; Zhang, B.; Xu, Y.; Guo, Z.; Wen, J. Learning discriminant direction binary palmprint descriptor. IEEE Trans. Image Process. 2019, 28, 3808–3820. [Google Scholar] [CrossRef]
  39. Xu, Y.; Fei, L.; Wen, J.; Zhang, D. Discriminative and robust competitive code for palmprint recognition. IEEE Trans. Syst. Man Cybern. Syst. 2018, 48, 232–241. [Google Scholar] [CrossRef]
  40. Fei, L.; Zhang, B.; Zhang, W.; Teng, S. Local apparent and latent direction extraction for palmprint recognition. Inf. Sci. 2019, 473, 59–72. [Google Scholar] [CrossRef]
  41. Fei, L.; Zhang, B.; Zhang, L.; Jia, W.; Wen, J.; Wu, J. Learning compact multifeature codes for palmprint recognition from a single training image per palm. IEEE Trans. Multimed. 2020, 1–13. [Google Scholar] [CrossRef]
  42. Modak, S.; Jha, V. Multibiometric fusion strategy and its applications: A review. Inf. Fusion 2019, 49, 174–204. [Google Scholar] [CrossRef]
  43. Poh, N.; Kitler, J. A unified framework for biometric expert fusion incorporating quality measures. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 34, 3–31. [Google Scholar] [CrossRef]
  44. Jing, X.; Yao, Y.; Zhang, D. Face and palmprint pixel level fusion and kernel DCV-RBF classifier for small sample biometric recognition. Pattern Recognit. 2007, 40, 3209–3224. [Google Scholar] [CrossRef]
  45. CASIA Palmprint Image Database. Available online: http://biometrics.idealtest.org/ (accessed on 15 January 2005).
  46. IITD Palmprint Image Database (Version1.0). Available online: http://www4.comp.polyu.edu.hk/~csajaykr/IITD/Database-Palm.htm (accessed on 15 June 2008).
  47. GPDS Palmprint Image Database. Available online: http://www.gpds.ulpgc.es (accessed on 15 May 2011).
  48. Ojala, T.; Pietikainen, M.; Maenpaa, T. Multiresolution gray-Scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 971–987. [Google Scholar] [CrossRef]
  49. Jabid, T.; Kabir, M.; Chae, O. Robust facial expression recognition based on local direction pattern. ETRI J. 2010, 32, 784–794. [Google Scholar] [CrossRef]
Figure 1. The basic idea of our proposed TFD method for palmprint recognition. We first extract and encode the triple types of features of a palmprint image, including the texture, gradient and direction features. Then, we form histogram-based feature vectors as the palmprint feature descriptors. Finally, we employ the weighted matching score fusion for feature matching and recognition.
Figure 2. The basic procedure of texture feature code calculation of a palmprint image. Given a pixel of a palmprint image, we first calculate the pixel distance between the pixel and its eight neighboring points. Then, we select the two neighboring points with the two maximum pixel distances and encode the position of them into the texture feature code.
Figure 3. The basic procedure of gradient feature code calculation of a palmprint image. We first define eight Kirsch templates on eight different directions. Then, we calculate the convolution of the eight Kirsch templates with the palmprint image to obtain eight edge responses on eight directions. Finally, we encode the directions of the two templates with the two maximum edge responses into the gradient feature code.
Figure 4. The basic procedure of direction feature code calculation of a palmprint image. We first define twelve direction feature templates based on Gabor filters with different directions. Then, we convolve these templates with the palmprint image to calculate the direction responses of twelve directions. Finally, we encode the direction with the maximum direction response and the maximum direction response interval into the direction feature code.
Figure 5. Typical palmprint image samples selected from the CASIA, IITD and GPDS databases, corresponding to the first to third rows, respectively.
Figure 6. The ROCs of different methods on the (a) CASIA, (b) IITD, and (c) GPDS databases.
Figure 7. The average identification accuracy rates of the proposed method versus different values of weighted parameters on the (a) CASIA, (b) IITD, and (c) GPDS databases.
Table 1. The average rank-one identification accuracies (%) of different methods on the CASIA, IITD, and GPDS databases, where n is the number of gallery samples per palm.

Database | n | LBP | LDP | Competitive | Ordinal | HOL | DoN | EBOCV | DRCC | ALDC | TFD
CASIA | 1 | 48.36 | 52.39 | 55.21 | 47.26 | 83.03 | 59.99 | 60.50 | 58.79 | 86.16 | 88.55
CASIA | 2 | 60.83 | 63.47 | 66.49 | 67.66 | 88.37 | 74.25 | 75.55 | 70.24 | 92.03 | 94.35
CASIA | 3 | 71.21 | 72.12 | 79.45 | 75.92 | 92.45 | 80.03 | 82.83 | 78.59 | 93.65 | 95.55
CASIA | 4 | 72.30 | 72.65 | 79.27 | 73.26 | 94.87 | 80.37 | 84.06 | 81.45 | 94.64 | 96.88
IITD | 1 | 43.64 | 43.87 | 45.92 | 42.25 | 84.88 | 60.71 | 60.73 | 55.81 | 85.07 | 89.04
IITD | 2 | 58.33 | 59.62 | 65.16 | 58.77 | 93.19 | 68.12 | 74.31 | 73.44 | 93.53 | 94.97
IITD | 3 | 62.12 | 62.87 | 72.25 | 70.73 | 95.12 | 73.43 | 84.10 | 80.14 | 96.15 | 96.83
IITD | 4 | 64.56 | 64.44 | 79.79 | 76.43 | 96.80 | 80.69 | 87.96 | 85.04 | 97.00 | 97.47
GPDS | 1 | 50.23 | 52.74 | 61.73 | 56.18 | 79.35 | 61.16 | 60.56 | 47.77 | 85.53 | 85.55
GPDS | 2 | 66.12 | 68.33 | 75.88 | 74.68 | 91.37 | 75.78 | 75.60 | 68.70 | 92.85 | 95.30
GPDS | 3 | 69.43 | 70.20 | 80.03 | 82.17 | 93.31 | 80.13 | 84.71 | 75.22 | 95.05 | 96.34
GPDS | 4 | 70.75 | 70.87 | 86.03 | 85.53 | 96.10 | 85.71 | 87.16 | 81.23 | 97.70 | 98.33
Table 2. The average time taken (in seconds) by different methods to extract the features of a palmprint image.

Methods | Feature Extraction Time (s)
Competitive | 0.0136
Ordinal | 0.0152
EBOCV | 0.0201
DoN | 0.0102
DRCC | 0.21
TFD | 0.0225
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
