# An efficient classification method based on principal component and sparse representation

Lin Zhai^{1}, Shujun Fu^{1} (corresponding author), Caiming Zhang^{2,3}, Yunxian Liu^{1}, Lu Wang^{4}, Guohua Liu^{5} and Mingqiang Yang^{6}

**Received: **9 January 2016

**Accepted: **5 June 2016

**Published: **22 June 2016

## Abstract

As an important application in optical imaging, palmprint recognition is affected by many unfavorable factors. An effective fusion of blockwise bi-directional two-dimensional principal component analysis and grouping sparse classification is presented. Dimension reduction and normalization are performed by blockwise bi-directional two-dimensional principal component analysis on palmprint images to extract feature matrixes, which are assembled into an overcomplete dictionary for sparse classification. A subspace orthogonal matching pursuit algorithm is designed to solve the grouping sparse representation. Finally, the classification result is obtained by comparing the residuals between the testing and reconstructed images. Experiments are carried out on a palmprint database, and the results show that this method is more robust against position and illumination changes of palmprint images and achieves a higher palmprint recognition rate.

### Keywords

Palmprint recognition · Image classification · Principal component analysis · Sparse representation · Subspace optimization

## Background

As an important application in optical imaging, palmprint recognition devices verify individual identity by extracting effective textures of the human palm (Shu and Zhang 1998; Feng et al. 2015; Fei et al. 2016). Compared with other optical recognition modalities, such as fingerprint, face and gait, palmprint recognition has many advantages, including inexpensive capture devices, low invasiveness and rich, stable texture features, which have made it a research focus in optical imaging and perception (Shu and Zhang 1998; Kong et al. 2009; Zhang et al. 2012).

Key issues in palmprint recognition are feature extraction and classification. The subspace approach is one of the main methods for feature extraction, and includes principal component analysis (PCA), independent component analysis (ICA), linear discriminant analysis (LDA), and so on (Connie et al. 2003; Duda et al. 2012; Zabalza et al. 2014; Ford et al. 2015). Among these, the most classic algorithm is principal component analysis (Belhumeur et al. 1997), in which the original image matrix is converted into a one-dimensional vector and a limited number of features is used to represent the original image as accurately as possible. Its disadvantage is that converting the image matrix into a one-dimensional vector loses spatial information. To address this, the two-dimensional principal component analysis (2DPCA) method proposed by Yang et al. (2004) operates on the image matrix directly instead of vectorizing it first, but the dimension of its feature vectors remains high. Zhang and Zhou (2005) proposed the bi-directional two-dimensional principal component analysis (\(\hbox {(2D)}^{2} \hbox {PCA}\)) method, which extracts features along both the row and column directions, reducing the correlation between rows and columns as well as the dimension of the image feature matrix; as a result, the recognition rate is much improved (Pan and Ruan 2008).

Traditional signal sampling must follow the Nyquist sampling theorem: to reconstruct an analog signal without distortion, the sampling frequency must be at least twice the highest frequency of the signal spectrum (Gonzalez and Woods 2004). Compressed sensing (CS) theory, proposed by Donoho (2006) and Candès and Wakin (2008), breaks the restriction of the traditional Nyquist sampling theorem and has brought a revolutionary change to the field of signal processing. In compressed sensing, one acquires discrete samples of a signal at a rate far below the Nyquist rate while still ensuring, under certain conditions, distortion-free reconstruction of the signal. If the observation matrix and the sparse signal are known, a sparse representation of the original signal can be obtained. This sparse representation can be thought of as a compressed coding of the original signal, and the coded signal can serve as the basis for classification in the context of palmprint recognition (Wright et al. 2010). Sparse representation based classification (SRC) has been widely applied in biometric recognition (Wright et al. 2009; Yin et al. 2016; Feng et al. 2015). In this framework, however, the computational complexity of reconstruction by L1-norm minimization is very high, so the numerical solution demands substantial time and space resources.

In view of the influence of unfavorable factors such as palm position, illumination and capture devices on the palmprint recognition rate, and the high computational complexity of traditional sparse classification methods, this paper presents a classification method fusing blockwise bi-directional two-dimensional principal component analysis and grouping sparse representation. In this method, a palmprint image is first divided into equal blocks, which allows the image information to be utilized more fully; then \(\hbox {(2D)}^{2} \hbox {PCA}\) is applied to each block to reduce the dimension and to build an overcomplete dictionary; finally, a dedicated subspace orthogonal matching pursuit algorithm is designed to solve the grouping sparse representation and obtain the final classification. The palmprint image is preprocessed before recognition, which largely mitigates the unfavorable factors above. In addition, the blockwise \(\hbox {(2D)}^{2} \hbox {PCA}\) used in the recognition stage can partly overcome the remaining difficulties: \(\hbox {(2D)}^{2} \hbox {PCA}\) extracts image information from both rows and columns, reflecting image features more accurately than PCA, 2DPCA and random projection, while image blocking adapts well to changes in posture and illumination (Gottumukkal and Asari 2004). The reason is as follows. Most dimensionality-reduction-based palmprint recognition methods measure the global information of each palmprint image and express it with a set of weights (features), so they are not very effective when position or illumination changes: the weight vectors under changed conditions deviate greatly from those of an image with normal position and illumination, making accurate identification difficult. If the palmprint image is instead divided into smaller blocks and a weight vector is computed for each block, the local information of the palmprint is well represented by these weights. When the position or illumination changes, only some of the blocks vary while the rest remain the same as in a normal palmprint, so the category can still be determined accurately from the unaffected blocks.

We organize the remainder of this paper as follows. In the “Principal component and sparse representation” section, principal component analysis with \(\hbox {(2D)}^{2} \hbox {PCA}\), sparse representation for compressed sensing, and sparse classification are reviewed. In the “Our method” section, feature extraction by blockwise bi-directional two-dimensional principal component analysis and our grouping sparse classification are described in detail. In the “Experimental results and analysis” section, experiments on a palmprint database are carried out to verify the advantages of the proposed algorithm. We give conclusions in the “Conclusions” section.

## Principal component and sparse representation

### \(\hbox {(2D)}^{2} \hbox {PCA}\)

In the \(\hbox {(2D)}^{2} \hbox {PCA}\) method, the feature matrix is extracted along both the row and column directions of the image, which reduces the correlation between them as well as the dimension of the image feature matrix.

Suppose *N* is the number of image sample categories, there are \(n_i\) images of size \(l\times h\) in the *i*th category, and the total number of samples is \(n=\sum _{i=1}^N n_i\). For a projection matrix \(X (h\times t)\) learned from the row direction and a projection matrix \(Z (l\times s)\) learned from the column direction of a set of training images, we project an original image \(A (l\times h)\) onto *X* and *Z* in sequence, generating an \(s\times t\) dimensional coefficient (feature) matrix \(C=Z^TAX\). Conversely, a reconstructed image \(\hat{A}\) can be obtained from the coefficient matrix (Zhang and Zhou 2005):

$$\begin{aligned} \hat{A}=ZCX^T. \end{aligned}$$
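As an illustration, the bi-directional projection and reconstruction can be sketched in NumPy. This is a minimal sketch under assumed sizes; the helper name `projection_matrices` is not from the paper, and mean-centering of the training images is omitted for brevity:

```python
import numpy as np

def projection_matrices(images, t, s):
    # Row-direction scatter: average of A^T A over training images (h x h).
    # Column-direction scatter: average of A A^T (l x l).
    # (In practice each A would first be mean-centered.)
    G_row = np.mean([A.T @ A for A in images], axis=0)
    G_col = np.mean([A @ A.T for A in images], axis=0)
    # eigh returns eigenvalues in ascending order; keep the top t / s eigenvectors.
    _, vec_r = np.linalg.eigh(G_row)
    _, vec_c = np.linalg.eigh(G_col)
    X = vec_r[:, -t:]   # h x t, row-direction projection
    Z = vec_c[:, -s:]   # l x s, column-direction projection
    return X, Z

rng = np.random.default_rng(0)
train = [rng.standard_normal((32, 24)) for _ in range(10)]  # toy l=32, h=24 images
X, Z = projection_matrices(train, t=6, s=6)

A = train[0]
C = Z.T @ A @ X       # s x t feature matrix, C = Z^T A X
A_hat = Z @ C @ X.T   # reconstruction from the feature matrix
```

The feature matrix `C` is 6 × 6 instead of 32 × 24, which is the dimension reduction the paper relies on when assembling the dictionary.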

### Sparse representation for compressed sensing

Compressed sensing theory tells us that if a signal is sparse or compressible under an orthogonal transformation, it can be observed at a much lower rate and represented with a small number of observation values. Moreover, the original signal can be estimated well from these sparse observations (Donoho 2006; Candès and Wakin 2008).

Suppose *x* is an original signal of length *n*, *y* is an observed signal of length *m*, and \(D (m\times n, m\ll n)\) is a measurement matrix satisfying suitable conditions (such as the restricted isometry property). Then *x* can be sparsely reconstructed from *y* by solving the following L0-norm optimization problem (Elad 2010):

$$\begin{aligned} \min _{x} \Vert x\Vert _0 \quad \text {s.t.}\quad y=Dx. \end{aligned}$$
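A greedy orthogonal matching pursuit (OMP) routine (Tropp and Gilbert 2007) is a standard way to approximate this L0 problem. The sketch below is a generic OMP, not the paper's SOMP variant; matrix sizes and the sparsity level `k` are illustrative:

```python
import numpy as np

def omp(D, y, k):
    """Greedy OMP: select up to k atoms of D to approximate y."""
    m, n = D.shape
    residual = y.copy()
    support = []
    for _ in range(k):
        # Atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit on the selected atoms, then update the residual.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(n)
    x[support] = coef
    return x

rng = np.random.default_rng(1)
D = rng.standard_normal((30, 50))
D /= np.linalg.norm(D, axis=0)          # unit-norm dictionary atoms
x_true = np.zeros(50)
x_true[[3, 17]] = [1.5, -2.0]           # a 2-sparse signal
y = D @ x_true                          # noise-free observation
x_hat = omp(D, y, k=2)
```

With a low-coherence random dictionary and a noise-free 2-sparse signal, the two active atoms are recovered and the residual vanishes, illustrating why sparse coefficients can serve as a classification code.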

### Sparse classification

A supervised classification uses labeled training samples from known object categories to determine the category to which a new test sample belongs. The basic idea of sparse representation-based classification (SRC) is as follows. Assuming that a test sample can be linearly represented by the training samples of its own category, one assembles an overcomplete dictionary composed of all training samples from all categories and projects the test sample onto it. Because the test sample has large coefficients only on the atoms of one category in this dictionary, its representation over the overcomplete dictionary is usually sparse. From this sparse representation, the test sample can be correctly classified.

Suppose there are *N* categories of gray palmprint images of size \(l\times h\), with \(n_i\) training samples in the *i*th category. If each image is converted into a column vector \(\nu \in R^m (m=l\times h)\), an image *y* to be identified from the *i*th category can be expressed as (Wright et al. 2009):

$$\begin{aligned} y=\alpha _{i,1}\nu _{i,1}+\alpha _{i,2}\nu _{i,2}+\cdots +\alpha _{i,n_i}\nu _{i,n_i}, \end{aligned}$$

where \(\alpha _{i,j}\) and \(\nu _{i,j}\) are the representation coefficients and the training samples of the *i*th category, respectively.

Assemble the overcomplete dictionary of all *N* categories \(D=[D_1,D_2,\ldots ,D_N]\in R^{m\times n}\), with sample dimension *m* and total number of samples \(n (= \sum _{i=1}^N n_i)\), where \(D_i=[\nu _{i,1},\nu _{i,2},\ldots ,\nu _{i,n_i}]\) is the group of training samples of the *i*th category. Any sample *y* to be identified can then be linearly represented through *D*:

$$\begin{aligned} y=Dx. \end{aligned}$$
(5)

Ideally, for an image *y* belonging to the *i*th category, there should be a corresponding vector \(x=[0,\ldots ,0,\alpha _{i,1}, \alpha _{i,2},\ldots ,\alpha _{i,n_i},0,\ldots ,0]^T\in R^n\) satisfying Eq. (5). Moreover, if the total number of samples is much larger than the number of samples in each category, \(n\gg \max (n_i)\), the proportion \(n_i/n\) of nonzero elements in *x* will be very small. The greater the difference between *n* and \(\max (n_i)\), the sparser *x* is, and the more favorable this is for sparse classification.

In practice, the observation is contaminated, \(y=Dx+e\), where *e* is an error vector representing changes in position and illumination of the testing samples.

From the above discussion, if the overcomplete dictionary *D* consists of all training samples, any testing sample can be expected to be sparse in *D*, and to be similar only to the elements of *D* belonging to its own category.

## Our method

### Blockwise bi-directional two-dimensional principal component analysis

The dimension of a palmprint image is usually very high after conversion into a one-dimensional vector. Solving Eq. (5) then requires solving a high-dimensional system of linear equations, which is very difficult in practical applications. In this paper, we propose a blockwise bi-directional two-dimensional principal component analysis to reduce the image dimension. Specifically, image blocking and the \(\hbox {(2D)}^{2} \hbox {PCA}\) method are combined to extract palmprint image features: the image to be identified is divided into subimages, and each subimage is then processed with \(\hbox {(2D)}^{2} \hbox {PCA}\). Because changes in position and illumination influence only a few subimages rather than all of them, this method effectively overcomes the negative effects of position and illumination changes that afflict traditional PCA algorithms. Finally, the reduced and normalized feature matrixes are converted into column vectors to assemble an overcomplete dictionary for sparse classification.

The proposed method effectively reduces the computational complexity. Each subimage block of a palmprint image has size \(l_1\times h_1\). If one retains \(p (p<\min \{l_1, h_1\})\) eigenvalues in the PCA transformation, then the size of a subimage after dimensionality reduction by \(\hbox {(2D)}^{2} \hbox {PCA}\) is \(p\times p\); by 2DPCA it is \(l_1\times p\); and although the size by PCA is \(p\times 1\), the image must first be converted into a one-dimensional vector before projection, so each subimage enters the projection as an \(l_1h_1\times 1\) vector. As one can see, \(\hbox {(2D)}^{2} \hbox {PCA}\) consumes the least memory. Thus, using \(\hbox {(2D)}^{2} \hbox {PCA}\) for dimensionality reduction effectively reduces the computational complexity compared with PCA and 2DPCA.
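The size comparison above can be made concrete with a small calculation; the block size and `p` below are illustrative values, not taken from the paper:

```python
# Feature sizes for one l1 x h1 subimage, keeping p eigenvalues (p < min(l1, h1)).
l1, h1, p = 32, 32, 7          # illustrative values

size_2d2pca = p * p            # (2D)^2PCA: p x p feature matrix
size_2dpca  = l1 * p           # 2DPCA: l1 x p feature matrix
size_pca    = l1 * h1          # PCA: subimage flattened to an l1*h1 x 1 vector
                               # before projection

print(size_2d2pca, size_2dpca, size_pca)
```

For these values, \(\hbox {(2D)}^{2} \hbox {PCA}\) stores 49 values per subimage versus 224 for 2DPCA and a 1024-element input vector for PCA.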

### Grouping sparse classification

To speed up the sparse solution of Eq. (5), we employ a subset technique, the subspace orthogonal matching pursuit (SOMP), to solve it within each training category, which effectively reduces the computational cost by eliminating a large number of columns of *D*.

Given the *N* categories of training samples, the algorithm proceeds as follows:

1. The training sample matrix is divided into blocks of equal size to form subimage sets, each composed of the subimages at the same position after division; dimension reduction and normalization with \(\hbox {(2D)}^{2} \hbox {PCA}\) are applied to each subimage; an overcomplete dictionary \(D'\) is formed and, according to sample category, divided into *N* training submatrixes:
   $$\begin{aligned} D'_1,D'_2,\ldots ,D'_N, \end{aligned}$$
   (9)
   that is, \(D'=[D'_1,D'_2,\ldots ,D'_N]\).
2. For each test sample *y*, the system \(y=D'_ix_i, i=1,2,\ldots , N\), is solved by the orthogonal matching pursuit algorithm, yielding the coefficient vectors \([x_1,x_2,\ldots ,x_N]\).
3. The recognition coefficient \(x_i\) of each category is compared against the testing sample to calculate the residual:
   $$\begin{aligned} r_i=\Vert y-D'_ix_i\Vert _2,\quad i=1,2,\ldots ,N. \end{aligned}$$
   (10)
4. Finally, the recognition result is output:
   $$\begin{aligned} Identity(y)=\mathop {\arg \min }\limits _{i}(r_i),\quad i=1,2,\ldots ,N, \end{aligned}$$
   (11)
   where the category achieving the minimum reconstruction error is taken as that of the test sample *y*.
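The per-category fit and residual comparison in the steps above can be sketched as follows. This is a simplified stand-in, assuming the subdictionaries \(D'_i\) are already built; a plain least-squares fit replaces the per-category OMP solver, and all names and sizes are illustrative:

```python
import numpy as np

def classify_by_group_residual(subdicts, y):
    """For each category, fit y within that category's subdictionary and
    return the label with the smallest reconstruction residual (Eqs. 10-11).
    Least squares stands in here for the per-category OMP step."""
    residuals = []
    for D_i in subdicts:
        x_i, *_ = np.linalg.lstsq(D_i, y, rcond=None)   # solve y ~ D'_i x_i
        residuals.append(np.linalg.norm(y - D_i @ x_i)) # r_i = ||y - D'_i x_i||_2
    return int(np.argmin(residuals)), residuals

rng = np.random.default_rng(2)
# Three categories, each with 5 training columns of feature dimension 40.
subdicts = [rng.standard_normal((40, 5)) for _ in range(3)]
# A test sample synthesized from category 1's training columns.
y = subdicts[1] @ rng.standard_normal(5)
label, residuals = classify_by_group_residual(subdicts, y)
```

Because `y` lies exactly in the span of category 1's columns, its residual is near zero while the other categories' residuals stay large, so `label` comes out as 1.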

SOMP greatly reduces the size of the sample set involved in each calculation, which makes the sparse coefficients and residuals easy to compute and reduces the number of loop iterations accordingly, improving precision compared with computing over the full-size sample set. More importantly, this method significantly improves the recognition result.

## Experimental results and analysis

To verify the advantages of the proposed method, which combines dimension reduction by \(\hbox {B(2D)}^{2} \hbox {PCA}\) with the subspace orthogonal matching pursuit (SOMP), three experiments are carried out, varying the dimension reduction method, the sparse classification method and the image blocking.

### Experiment 1: recognition rates with different dimension reduction methods

Different dimension reduction methods are compared. Random projection: the test image is converted into a column vector and projected onto a random Gaussian matrix to obtain feature vectors. PCA: the test image is likewise converted into a column vector and reduced in dimension. \(\hbox {(2D)}^{2} \hbox {PCA}\): the dimension of the test image is reduced by bi-directional projection to obtain the feature matrix. \(\hbox {B(2D)}^{2} \hbox {PCA}\): the test image is first divided into 16 subimages (blocking number \(4\times 4\)), and \(\hbox {(2D)}^{2} \hbox {PCA}\) is then used to reduce the dimension of each subimage and obtain its feature matrix. In the final step, the subspace orthogonal matching pursuit (SOMP) is applied to all of the above feature data to complete the corresponding sparse classifications.

Optimal recognition rates (%) and corresponding feature dimensions with different dimension reduction methods

| Dimension reduction method | Feature dimension | Optimal recognition rate (%) |
|---|---|---|
| Random projection | 225 | 94.8 |
| PCA | 100 | 95.6 |
| \(\hbox {(2D)}^{2} \hbox {PCA}\) | 196 | 96.4 |
| \(\hbox {B(2D)}^{2} \hbox {PCA}\) | 49 | 97.2 |

### Experiment 2: recognition rates with different classification methods

The \(\hbox {B(2D)}^{2} \hbox {PCA}\) algorithm is first used to reduce the dimension of the test image and obtain the feature vector, followed by normalization; then both the OMP and SOMP methods are used for sparse classification and recognition.

### Experiment 3: recognition rates with different blocking numbers

Here, we discuss the effect of feature dimension on recognition under different blocking numbers: \(1\times 1\) (no division), \(2\times 2\), \(2\times 4\), \(4\times 4\) and \(8\times 8\). SOMP is used for the sparse classification.

In Fig. 5, recognition rates for different blocking numbers are shown. When the feature dimension is low, the recognition rate with blocking is higher than without it under the same conditions; when the feature size is 7, sparse classification with \(4\times 4\) blocking reaches the highest recognition rate of 97.2 %, which shows that the combination of the blocking method and \(\hbox {(2D)}^{2} \hbox {PCA}\) is very effective. The reason for this advantage is as follows: after division each subimage is smaller, so more feature details conducive to palmprint recognition can be extracted; at the same time, when the palmprint image is disturbed by position, illumination and other external factors, only a small portion of the subimages is affected while the others remain intact.

In Fig. 5, it can also be seen that when the blocking number is too small, each subimage is large, so the method cannot exploit its insensitivity to position and illumination changes; conversely, when the blocking number is too large, each subimage is very small, which damages the global structural information of the palmprint image and decreases the recognition rate. Therefore, one should select an appropriate blocking number and feature dimension according to the degradation of the palmprint image when using \(\hbox {B(2D)}^{2} \hbox {PCA}\) to extract feature vectors.

## Conclusions

In this paper, an efficient grouping sparse classification method is proposed that uses robust blockwise principal components as feature vectors for dimension reduction, which greatly reduces the feature dimension and overcomes interference from unfavorable external factors, yielding better results in palmprint recognition. Clear advantages in both recognition rate and feature-dimension reduction are verified in experiments on palmprint data in comparison with several related methods. For noise-corrupted images, palmprint enhancement via fringe filtering (Fu and Zhang 2012) toward a more effective dictionary is ongoing work in grouping sparse classification for degraded palmprint images.

## Declarations

### Authors' contributions

LZ and SF carried out the research work and drafted the manuscript. CZ, YL, LW, GL and MY made some revisions of the manuscript. All authors read and approved the final manuscript.

### Acknowledgements

The research has been supported in part by the National Natural Science Foundation of China (61272239, 61070094, 61020106001), the NSFC Joint Fund with Guangdong (U1201258), the Science and Technology Development Project of Shandong Province of China (2014GGX101024), and the Fundamental Research Funds of Shandong University (2014JC012). We also thank Prof. Xin Pan (Inner Mongolia Agricultural University) and Prof. Qiuqi Ruan (Beijing Jiaotong University) for their valuable input on the manuscript.

### Competing interests

The authors declare that they have no competing interests.

**Open Access**This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


## References

- Belhumeur PN, Hespanha JP, Kriegman D (1997) Eigenfaces vs. fisherfaces: recognition using class specific linear projection. IEEE Trans Pattern Anal Mach Intell 19(7):711–720
- Candès EJ, Wakin MB (2008) An introduction to compressive sampling. IEEE Signal Process Mag 25(2):21–30
- Connie T, Teoh A, Goh M, Ngo D (2003) Palmprint recognition with PCA and ICA. In: Proceedings of image and vision computing, New Zealand, pp 227–232
- Donoho DL (2006) Compressed sensing. IEEE Trans Inf Theory 52(4):1289–1306
- Duda RO, Hart PE, Stork DG (2012) Pattern classification. Wiley, Hoboken
- Elad M (2010) Sparse and redundant representations: from theory to applications in signal and image processing. Springer, New York
- Fei L, Xu Y, Tang W, Zhang D (2016) Double-orientation code and nonlinear matching scheme for palmprint recognition. Pattern Recognit 49:89–101
- Feng Q, Zhu X, Pan J (2015) Global linear regression coefficient classifier for recognition. Opt Int J Light Electron Opt 126(21):3234–3239
- Feng J, Wang H, Li Y, Liu F (2015) Palmprint feature extraction method based on rotation-invariance. Biom Recognit 215–223
- Ford ME, Wei W, Moore LA, Burshell DR, Cannady K, Mack F, Ezerioha N, Ercole K, Garrett-Mayer E (2015) Evaluating the reliability of the Attitudes to Randomized Trial Questionnaire (ARTQ) in a predominantly African American sample. Springerplus 4(1):1–10
- Fu S, Zhang C (2012) Fringe pattern denoising via image decomposition. Opt Lett 37(3):422–424
- Gonzalez RC, Woods RE (2004) Digital image processing, 2nd edn. Prentice Hall, New York
- Gottumukkal R, Asari VK (2004) An improved face recognition technique based on modular PCA approach. Pattern Recognit Lett 25(4):429–436
- Kong A, Zhang D, Kamel M (2009) A survey of palmprint recognition. Pattern Recognit 42(7):1408–1418
- Pan X, Ruan Q (2008) Palmprint recognition using gabor feature-based \(\text{(2D) }^{2}\text{ PCA }\). Neurocomputing 71(13):3032–3036
- Pan X, Ruan Q (2009) Palmprint recognition using Gabor-based local invariant features. Neurocomputing 72(7–9):2040–2045
- Shu W, Zhang D (1998) Automated personal identification by palmprint. Opt Eng 37(8):2359–2362
- Tropp JA, Gilbert AC (2007) Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans Inf Theory 53(12):4655–4666
- Wright J, Yang A, Ganesh A, Sastry SS, Ma Y (2009) Robust face recognition via sparse representation. IEEE Trans Pattern Anal Mach Intell 31(2):210–227
- Wright J, Ma Y, Mairal J, Sapiro G, Huang TS, Yan S (2010) Sparse representation for computer vision and pattern recognition. Proc IEEE 98(6):1031–1044
- Yang J, Zhang D, Frangi AF, Yang J (2004) Two-dimensional PCA: a new approach to appearance-based face representation and recognition. IEEE Trans Pattern Anal Mach Intell 26(1):131–137
- Yin J, Zeng W, Wei L (2016) Optimal feature extraction methods for classification methods and their applications to biometric recognition. Knowl Based Syst 99:112–122
- Zabalza J, Ren J, Yang M, Zhang Y, Wang J, Marshall S, Han J (2014) Novel folded-PCA for improved feature extraction and data reduction with hyperspectral imaging and SAR in remote sensing. ISPRS J Photogramm Remote Sens 93:112–122
- Zhang D, Zuo W, Yue F (2012) A comparative study of palmprint recognition algorithms. ACM Comput Surv 44(1):2–137
- Zhang D, Zhou Z (2005) \(\text{(2D) }^{2}\text{ PCA }\): two-directional two-dimensional PCA for efficient face representation and recognition. Neurocomputing 69(1):224–231