
Experimental evaluation of fingerprint verification system based on double random phase encoding

Open Access

Abstract

We have proposed a smart card holder authentication system that combines fingerprint verification with PIN verification by applying a double random phase encoding scheme. In this system, the probability of accurately verifying an authorized individual decreases when the input fingerprint is significantly shifted. In this paper, a review of the proposed system is presented, and preprocessing for improving the false rejection rate is proposed. In the proposed method, the position difference between two fingerprint images is estimated by using a template optimized for core detection. When the estimated difference exceeds the permissible level, the user inputs the fingerprint again. The effectiveness of the proposed method is confirmed by a computational experiment, whose results show that the false rejection rate is improved.

©2006 Optical Society of America

1. Introduction

Cipher-based authentication using smart cards is a very effective method of personal authentication for various services on the Internet. A smart card can guarantee the validity of the secret key stored in it, since reading out the key data is prevented by its access control function and encryption and decryption are carried out by its own embedded IC. Hence, reliable cipher-based authentication, such as challenge-response authentication, can be achieved. Furthermore, personal identification number (PIN) verification is usually employed to determine whether the person in possession of a smart card is the authorized cardholder. The drawback of PIN verification is that a long, random PIN code, which is difficult to memorize, must be used to avoid PIN cracking. On the other hand, biometrics, which identifies individuals by their particular physical features, is also used for personal identification. Biometric authentication is expected to become a convenient authentication scheme since it requires neither the memorization of authentication data nor the storage of information for verifying an authorized individual. In addition, it is considered highly reliable since biometric data is difficult to masquerade or forge. However, biometric authentication is not well suited to personal identification over networks because biometric data is large and cannot be transmitted quickly. Moreover, there is a high probability that the biometric data will leak when it is transmitted to the enrollment server. An effective solution for personal authentication over networks is the combination of smart card authentication and biometrics, e.g., the application of fingerprint verification to smart card holder authentication. In this case, an ingenious method is required to implement fingerprint verification in the embedded IC chip [1].

Meanwhile, optical technology has also been developed for security purposes, e.g., holograms attached to banknotes for anti-counterfeiting. Recently, various studies on optical information processing for information security have been conducted, including studies on face recognition by optical pattern matching [2,3]. Optical processing is suited not only to massively parallel processing but also to image processing, because images captured by sensors and observed by human eyes essentially represent optical information. An optical encryption technique termed double random phase encoding, which encrypts an image by optical phase modulation, has been developed [4,5]. We apply it to biometric authentication in order to enable the use of biometrics as cipher keys.

In Refs. [6] and [7], we proposed a smart card authentication system that combines fingerprint verification with conventional PIN verification by applying double random phase encoding. A drawback of this system is that when the position difference between the enrollment and verification fingerprints is very large, the probability of accurately verifying an authorized individual decreases markedly. Therefore, in order to eliminate the influence of significantly shifted fingerprints, we present a preprocessing method that separates valid and invalid fingerprints. In addition, we conduct experiments using an electronic computer in order to evaluate the performance of the improved system.

2. Review of hybrid authentication system that combines fingerprint verification with PIN verification

2.1 Smart card holder authentication method

In the proposed method, shown in Fig. 1, the PIN data encoded as a 2D image is optically encrypted using a secret key generated from the user's fingerprint image. The encrypted PIN data is stored in a hologram attached to the card surface. During cardholder authentication, the encrypted data is optically decrypted using a key generated from the cardholder's fingerprint. The correct PIN data is restored only when the legitimate cardholder's fingerprint is presented; the correct PIN is then input into the smart card, making it usable.

This system is highly secure because the PIN data stored on the card surface is encrypted and the correct key can be generated only from the cardholder's fingerprint. PIN verification relies on a fundamental function of the smart card, which ensures reliable operation. The system is also very convenient because it does not require users to memorize the PIN or any other information. Moreover, fingerprint verification can be implemented without changing the specification of the IC chip embedded in the smart card.

Fig. 1. Smart card holder authentication system that combines fingerprint verification with PIN verification by applying double random phase encoding.

2.2 Optical image encryption and pattern matching

In double random phase encoding, the plain image can be restored successfully only when the pair of keys is identical. This method can not only hide data but also perform pattern matching between two images; in other words, the similarity between two images is linked to the restoration accuracy of the decrypted image. Therefore, double random phase encoding is well suited to a cipher that uses a fingerprint image as the key.

The process of optical image encryption by double random phase encoding is shown in Fig. 2. Let the complex amplitude of the original image, which is the 2D-encoded PIN, be $f(x,y)$. Then, $f(x,y)$ is multiplied by a random phase mask $\exp[jR(x,y)]$, and the phase-modulated image $f_m(x,y)$ is expressed as follows:

$$f_m(x,y) = f(x,y)\exp[jR(x,y)].$$

Further, its Fourier transform $F_m(u,v)$ is multiplied by another random phase mask $\exp[jK_E(u,v)]$, which corresponds to the encryption key image and is generated from a fingerprint image. The encrypted image $F_m(u,v)\exp[jK_E(u,v)]$ is then recorded as the enrollment information. During decryption, the complex conjugate of the encrypted image, $F_m^*(u,v)\exp[-jK_E(u,v)]$, is multiplied by a phase mask $\exp[jK_D(u,v)]$ that corresponds to the decryption key image, which yields the complex amplitude image $F_m^*(u,v)\exp[j\{-K_E(u,v)+K_D(u,v)\}]$. Its Fourier transform produces the decrypted image $f_r(x_d,y_d)$ as follows:

$$f_r(x_d,y_d) = \mathcal{F}\left[F_m^*(u,v)\,N(u,v)\right] = f_m^*(x_d,y_d) * n(x_d,y_d),$$

where $N(u,v)=\exp[j\{-K_E(u,v)+K_D(u,v)\}]$, $n(x_d,y_d)=\mathcal{F}[N(u,v)]$, and $\mathcal{F}[\,\cdot\,]$ denotes the Fourier transform operator. Let us consider the following two images:

$$g_1(x',y') = \mathcal{F}^{-1}\left[\exp\{jK_E(u,v)\}\right],$$
$$g_2(x',y') = \mathcal{F}^{-1}\left[\exp\{jK_D(u,v)\}\right].$$

Then, $n(x_d,y_d)$ can be regarded as the correlation between $g_1(x',y')$ and $g_2(x',y')$. This implies that the decrypted image in double random phase encoding is the convolution of the plain image and the correlation between the two key images.
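To make this flow concrete, the following is a minimal numerical sketch of double random phase encoding in Python/NumPy. The array names, the image size, and the use of FFTs in place of optical Fourier transforms are illustrative assumptions rather than the authors' implementation; with $K_D = K_E$, the magnitude of the decrypted field reproduces the intensity pattern of $f(x,y)$.

```python
# Minimal sketch of double random phase encoding (illustrative, not the authors' code).
import numpy as np

rng = np.random.default_rng(0)
N = 256

f = np.zeros((N, N))                        # plain image f(x, y), e.g., the 2D-encoded PIN
f[96:160, 96:160] = 1.0                     # placeholder pattern

R = 2 * np.pi * rng.random((N, N))          # input-plane random phase R(x, y)
K_E = 2 * np.pi * rng.random((N, N))        # Fourier-plane encryption key K_E(u, v)
K_D = K_E.copy()                            # correct decryption key: K_D = K_E

# Encryption: phase modulation, Fourier transform, key multiplication
f_m = f * np.exp(1j * R)                    # f_m(x, y) = f(x, y) exp[jR(x, y)]
F_m = np.fft.fft2(f_m)                      # F_m(u, v)
enc = F_m * np.exp(1j * K_E)                # stored encrypted data

# Decryption: conjugate of the encrypted data times the decryption key mask,
# followed by a Fourier transform
dec_input = np.conj(enc) * np.exp(1j * K_D)     # F_m*(u, v) exp[j(K_D - K_E)]
f_r = np.fft.fft2(dec_input)                    # decrypted field f_r(x_d, y_d)

restored = np.abs(f_r) / np.abs(f_r).max()
assert np.allclose(restored, f)                 # correct key restores the intensity pattern of f
```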

Fig. 2. Flow of double random phase encoding.

On the other hand, when the amplitude component in the Fourier domain is constant, the correlation is termed the phase-only correlation (POC) [8]. It is well known that the POC between two images of the same fingerprint exhibits a sharp peak, while that between two different fingerprints exhibits a random pattern [9] (Fig. 3). Therefore, in the proposed method, we let $n(x_d,y_d)$ be the POC of the two fingerprint images. In other words, the phase components of the Fourier-transformed fingerprint images are used as the keys in double random phase encoding; the two fingerprint images $g_E(x',y')$ and $g_D(x',y')$ are Fourier transformed as follows:


$$G_E(u,v) = \mathcal{F}[g_E(x',y')] = A_E(u,v)\exp\{jP_E(u,v)\},$$
$$G_D(u,v) = \mathcal{F}[g_D(x',y')] = A_D(u,v)\exp\{jP_D(u,v)\},$$

where $A_E(u,v)$ and $A_D(u,v)$ are the amplitude components and $\exp[jP_E(u,v)]$ and $\exp[jP_D(u,v)]$ are the phase components. Then, the phase components are used as the keys as follows:

$$K_E(u,v) = P_E(u,v),$$
$$K_D(u,v) = P_D(u,v).$$

In this case, $n(x_d,y_d)$ approximately satisfies the following relation:

$$n(x_d,y_d) \simeq \begin{cases} \delta(x_d-\alpha,\, y_d-\beta) & \text{(correct fingerprint)}, \\ \text{random sequence} & \text{(incorrect fingerprint)}, \end{cases}$$

where $\delta(\cdot)$ denotes the Dirac delta function and $\alpha$ and $\beta$ represent the shift of the fingerprint. When a correct fingerprint is used, the restored image $f_r(x_d,y_d)$ is expressed as $f_m^*(x_d-\alpha,\,y_d-\beta)$, with the same intensity pattern as that of $f(x,y)$. When an incorrect fingerprint is used, $f_r(x_d,y_d)$ produces a random image, which is the convolution of $f_m(x,y)$ and a random sequence.
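As a complement to the relation above, the following sketch shows how the Fourier phases of the two fingerprints can serve as the keys, so that $n(x_d,y_d)$ reduces to the phase-only correlation of the two prints. The fingerprint arrays are assumed to be grayscale images already zero-padded to a common size; the function names are illustrative.

```python
import numpy as np

def phase_key(fingerprint):
    """Fourier phase component P(u, v) of a fingerprint image, used as the key."""
    return np.angle(np.fft.fft2(fingerprint))

def poc(fp_enroll, fp_verify):
    """n(x_d, y_d) = F[exp(j{-K_E + K_D})]: a sharp peak for the same finger,
    a noise-like pattern for a different finger."""
    K_E = phase_key(fp_enroll)
    K_D = phase_key(fp_verify)
    return np.fft.fft2(np.exp(1j * (-K_E + K_D)))

# The peak location gives the shift (alpha, beta) between the two prints:
# n = poc(fp_enroll, fp_verify)
# beta, alpha = np.unravel_index(np.argmax(np.abs(n)), n.shape)   # (row, col)
```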

Even when the positions of the two fingerprints are different, the correct PIN image can be restored because the POC is a shift-invariant operation. However, rotation of the fingerprint is not tolerated [10,11] since the POC is very sensitive to rotation. In order to solve this problem, we generate several slightly rotated versions of the verification fingerprint and verify each of them against the enrollment fingerprint. The best decrypted image is then selected according to the quality of the restored PIN pattern. The technique for selecting an appropriate decrypted image is described in section 2.3.
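A possible form of this rotation search is sketched below; the angle range, the interpolation settings, and the tag-correlation score used to rank the candidates are assumptions for illustration, not the authors' exact procedure. The tag image is assumed to be zero-padded to the size of the decrypted image.

```python
import numpy as np
from scipy.ndimage import rotate

def decrypt_with(fingerprint, encrypted):
    """Decrypt the stored data with the phase key of a (possibly rotated) fingerprint."""
    K_D = np.angle(np.fft.fft2(fingerprint))
    return np.fft.fft2(np.conj(encrypted) * np.exp(1j * K_D))

def tag_score(decrypted, tag):
    """Assumed quality measure: peak of the correlation with a corner tag."""
    corr = np.fft.ifft2(np.fft.fft2(np.abs(decrypted)) * np.conj(np.fft.fft2(tag)))
    return np.abs(corr).max()

def best_decryption(fp_verify, encrypted, tag, angles_deg=np.arange(-6, 7, 2)):
    """Try several slightly rotated fingerprints and keep the best-scoring decryption."""
    candidates = []
    for ang in angles_deg:
        fp_rot = rotate(fp_verify, ang, reshape=False, order=1)
        dec = decrypt_with(fp_rot, encrypted)
        candidates.append((tag_score(dec, tag), dec))
    return max(candidates, key=lambda c: c[0])[1]
```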

Fig. 3. POC waveforms. The left waveform is the POC between two images of the same fingerprint, and the right one is the POC between different fingerprints.

2.3 PIN encoding into 2D image

Since double random phase encoding is employed for 2D image encryption, the binary PIN data must be encoded into a suitable 2D image. Conventional 2D encoding techniques include the QR code [12] and MaxiCode developed by UPS; they can quickly and precisely determine the matrix position even when the matrix pattern is shifted and rotated. Tanida and Ichioka have proposed a 2D bit coding method for optical bit operations [13]. Our proposed system needs to tolerate only vertical and horizontal shifts and not rotation, because the restored PIN pattern in the decrypted image is not rotated but shifted depending on the relative position of the two fingerprints. In addition, the integrated intensity of the encoded image should be kept constant irrespective of the PIN, because an integrated intensity that varies with the PIN may serve as a clue for intruders to exploit the system.

The encoding method employed in Refs. [6] and [7] does not satisfy the above requirement; in that method, binary ones and zeros are represented by white and black squares, respectively. Moreover, when decoding the decrypted image into the PIN data, it is difficult to establish an appropriate threshold level for distinguishing between the bit values stably, because the level varies depending on the PIN. In this study, the binary PIN data is encoded into a 2D bit pattern image, as shown in Fig. 4(a). Two squares are used to represent each bit; the bit is zero when the right square is white and one when the left square is white. In order to determine the shifted position of the bit pattern, tags are placed as position markers at the upper left and lower right corners of the bit pattern. The exact position of the restored bit pattern is determined by calculating the correlation between the decrypted image and the tags. These tags are also used to select the most appropriate decrypted image from among the images decrypted with the rotated fingerprint images.

Figure 4(b) shows an image translated from the character code sequence “1234ABCD.” The PIN is decoded from the decrypted image according to the following procedure (illustrated in Fig. 5). First, the bit pattern is extracted from the decrypted image by using the tags. Next, the intensities of the right and left squares of each bit are compared; if the intensity of the right square is larger, the bit value is taken to be zero. The binary value of each bit is determined in the same way, and finally the binary PIN data is restored. This technique stably converts between the PIN data and the 2D image, keeps the integrated intensity of the encoded image constant, and requires no threshold level to distinguish between the bit values.
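The following sketch illustrates the two-squares-per-bit encoding and the intensity-comparison decoding described above. The square size, the number of bits per row, and the omission of the tag-based alignment step are illustrative assumptions.

```python
import numpy as np

SQ = 8             # assumed square size in pixels
BITS_PER_ROW = 8   # assumed layout

def encode_bits(bits):
    """Encode a binary sequence into the two-squares-per-bit image."""
    rows = (len(bits) + BITS_PER_ROW - 1) // BITS_PER_ROW
    img = np.zeros((rows * SQ, BITS_PER_ROW * 2 * SQ))
    for i, b in enumerate(bits):
        r, c = divmod(i, BITS_PER_ROW)
        col = c * 2 * SQ + (0 if b == 1 else SQ)   # left square white -> 1, right -> 0
        img[r * SQ:(r + 1) * SQ, col:col + SQ] = 1.0
    return img

def decode_bits(img, n_bits):
    """Decode by comparing left/right intensities; no threshold level is needed."""
    bits = []
    for i in range(n_bits):
        r, c = divmod(i, BITS_PER_ROW)
        cell = img[r * SQ:(r + 1) * SQ, c * 2 * SQ:(c + 1) * 2 * SQ]
        left, right = cell[:, :SQ].sum(), cell[:, SQ:].sum()
        bits.append(1 if left > right else 0)
    return bits

# Round trip for the 64-bit pattern obtained from "1234ABCD"
pin_bits = [int(b) for ch in "1234ABCD" for b in format(ord(ch), "08b")]
assert decode_bits(encode_bits(pin_bits), len(pin_bits)) == pin_bits
```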

Fig. 4. (a) Method of encoding bits into a 2D image: when the right square is white, the pair of squares represents zero, and when the left one is white, it represents one. (b) Example image transformed from the character sequence “1234ABCD”. Tags are placed at the upper left and lower right corners of the bit pattern.

Fig. 5. Procedure for decoding the PIN data from the decrypted image.

3. Improvement in the proposed system

3.1 Preprocessing for fingerprints that are shifted significantly

In this section, a method for improving the proposed verification system is presented. The system is based on image correlation; therefore, the decrypted image is restored correctly even when the finger moves vertically or horizontally on the fingerprint sensor. However, the results of the electronic computer experiments in Refs. [6] and [7] show that the probability of accurate verification tends to decrease when the verification fingerprint is significantly shifted relative to the enrollment fingerprint. This problem is probably caused by the difference in the area captured by the fingerprint sensor. When the false rejection rate (FRR) is very high, the smart card may become unavailable, because it locks itself when PIN verification fails several times in a row.

Therefore, we introduce preprocessing in which the distance between the enrollment and verification fingerprints is estimated and the validity of the verification fingerprint is determined based on the estimated distance. When the fingerprint is determined to be significantly shifted, the user inputs it again.

3.2 Estimation of distance between two fingerprints

In our system, the fingerprint image used for enrollment cannot be compared directly with the verification image. Therefore, the distance between the two fingerprints must be estimated without comparing their images. However, it is not necessary to determine the exact distance, because a slightly shifted fingerprint is acceptable. A fast and simple operation is sufficient as long as the estimated distance is reasonably accurate.

In order to satisfy these requirements, the core, which is the center point of the fingerprint pattern, is used as a datum point. We determine the cores of both the enrollment and verification fingerprints and calculate the difference in their positions. In order to determine the core position, we calculate the image correlation between the fingerprint image and a template image, which is suitably designed. The method for generating the template is described in subsection 3.3.

We consider an implementation in which the coordinates of the enrollment fingerprint core are registered on the card surface in a suitable form, e.g., a bar code, and are read out when the shift is estimated during verification. The workflow of this implementation is illustrated in Fig. 6.

Fig. 6. Workflow of the preprocessing.

3.3. Generation of template for core detection

We describe a method to generate a template that can reliably detect the core of any type of fingerprint. Although each fingerprint exhibits a different pattern, all fingerprints exhibit some arcs on their upper sides; a fingerprint pattern comprises many arcs with different curvature radii. Therefore, an image containing an arc line is used as the template (Fig. 7). The correlation peak between the fingerprint and the template appears at the position where the arc of the template fits the fingerprint ridge with the closest curvature radius. The coordinates of the correlation peak are taken to represent the approximate position of the core. We can thus determine the distance between two fingerprints without comparing them directly; their estimated position difference $(x_e, y_e)$ is expressed as follows:

$$(x_e, y_e) = (x_1 - x_2,\; y_1 - y_2),$$

where $(x_1, y_1)$ are the coordinates of the correlation peak between the enrollment fingerprint and the template, and $(x_2, y_2)$ are the coordinates of the correlation peak between the verification fingerprint and the template.
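A minimal sketch of this core-based shift estimation is given below, with the correlation computed in the Fourier domain. The template is assumed to be zero-padded to the size of the fingerprint images (its construction is sketched after the list of training steps below), and the 50-pixel rejection threshold follows the value used in the experiment of section 4.2.

```python
import numpy as np

def core_position(fingerprint, template):
    """Row/column of the correlation peak between a fingerprint and the arc template.
    Both arrays are assumed to be zero-padded to the same shape."""
    corr = np.fft.ifft2(np.fft.fft2(fingerprint) * np.conj(np.fft.fft2(template)))
    return np.unravel_index(np.argmax(np.abs(corr)), corr.shape)

def estimated_shift(fp_enroll, fp_verify, template):
    """(x_e, y_e) = (x_1 - x_2, y_1 - y_2) from the two correlation peaks."""
    y1, x1 = core_position(fp_enroll, template)    # registered at enrollment
    y2, x2 = core_position(fp_verify, template)
    return x1 - x2, y1 - y2

def accept_for_verification(fp_enroll, fp_verify, template, limit=50):
    """Reject (ask for re-input) when the estimated shift exceeds `limit` pixels."""
    xe, ye = estimated_shift(fp_enroll, fp_verify, template)
    return float(np.hypot(xe, ye)) <= limit
```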

In the proposed method, the arc of the template is modeled as a symmetrical, length-limited curve that is part of an ellipse. As shown in Fig. 7, this arc is defined by four parameters: major axis a, minor axis b, angular range of the arc θ, and line thickness w. These parameters are determined using training fingerprint images. In order to find the best parameter configuration, an evaluation function ∆E is defined as the average error between the actual and estimated position differences of two fingerprint images. It is expressed as follows:

$$\Delta E = \underset{\text{all training image pairs}}{\operatorname{average}} \sqrt{(x_a - x_e)^2 + (y_a - y_e)^2},$$

where $(x_a, y_a)$ denotes the actual position difference between the two fingerprint images. The training sequence is summarized as follows (a minimal code sketch follows the list):

  1. Set the parameters.
  2. Generate a pair of training fingerprint images captured from the same fingerprint and calculate their actual position difference (xa ,ya ) by correlation.
  3. Calculate the core positions of two training fingerprint images (x 1,y 1) and (x 2,y 2) by the correlation between each training image and the template image; subsequently, determine their estimated difference (xe ,ye ).
  4. Repeat steps 2 and 3 for all fingerprint image pairs of the same individual.
  5. Calculate the evaluation function ∆E.
  6. If the current value of ∆E is smaller than the best value obtained so far, update the best value and memorize the current parameter configuration.
  7. Repeat steps 1 to 6 until all parameter configurations are tested.
  8. Finally, select the best parameter configuration with the smallest value of ∆E.
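The following is a minimal sketch of this exhaustive search. It assumes that a and b are treated as semi-axes, that the parameter grids are illustrative, and that estimated_shift() from the preceding sketch is available; make_arc_template() is only a rough rendering of the arc in Fig. 7, not the authors' exact template construction.

```python
import itertools
import numpy as np

def make_arc_template(a, b, theta_deg, w, shape=(256, 256)):
    """Upward-convex elliptical arc (semi-axes a, b), limited to an angular range
    theta_deg about the top of the ellipse, with line width of roughly w pixels."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    cy, cx = shape[0] // 2, shape[1] // 2
    r = np.hypot((xx - cx) / a, (yy - cy) / b)           # equals 1.0 on the ellipse
    on_arc = np.abs(r - 1.0) * min(a, b) <= w / 2        # approximate line thickness
    ang = np.degrees(np.arctan2(-(yy - cy), xx - cx))    # 90 deg points straight up
    in_range = np.abs(ang - 90.0) <= theta_deg / 2
    return (on_arc & in_range).astype(float)

def delta_E(template, training_pairs):
    """Average error between actual and estimated position differences (steps 2-5)."""
    errs = []
    for fp1, fp2, (xa, ya) in training_pairs:
        xe, ye = estimated_shift(fp1, fp2, template)
        errs.append(np.hypot(xa - xe, ya - ye))
    return float(np.mean(errs))

def train_template(training_pairs, shape=(256, 256)):
    """Steps 1-8: test all parameter configurations and keep the best one."""
    best_err, best_params = np.inf, None
    for a, b, theta, w in itertools.product(range(60, 101, 2),   # major axis a
                                            range(60, 101, 2),   # minor axis b
                                            range(30, 91, 3),    # angular range theta
                                            range(1, 6)):        # line thickness w
        err = delta_E(make_arc_template(a, b, theta, w, shape), training_pairs)
        if err < best_err:                                       # step 6
            best_err, best_params = err, (a, b, theta, w)
    return best_params, best_err                                 # step 8
```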
Fig. 7. Template image, which is a part of an ellipse, and the parameters that characterize the arc.

4. Experiments

4.1 Template generation

In order to generate the template, 64 fingerprint images of eight individuals are obtained to form a training set. These images are captured using a capacitive fingerprint sensor LSI manufactured by NTT Electronics Corporation. The sensor produces each image with a size of 224×256 pixels in 8-bit grayscale; the size is increased to 256×256 pixels by adding zero-valued pixels in order to perform a fast Fourier transform. One fingerprint image of each individual is used for enrollment, and the others are used for verification. The optimized template image is shown in Fig. 8(a); the optimized parameters are a = 82, b = 86, θ = 57°, and w = 3. An image showing the detected cores is shown in Fig. 8(b). This demonstrates that the template can determine the approximate position of the core even when the fingerprint image is shifted significantly. We evaluated the performance of the generated template by applying it to another 50 test images of the eight individuals. The result revealed an average estimation error of 7.7 pixels.

Fig. 8. Experimental result of template image generation: (a) optimized template image; (b) estimated cores, represented by black points.

4.2 Verification

In order to evaluate the performance of the fingerprint verification system, we conduct experiments based on electronic computer processing. In these experiments, the PIN data “1234ABCD” is encrypted and decrypted using keys transformed from fingerprint images, and the PIN data restored from the decrypted images is evaluated. The experiments are conducted with and without the removal of significantly shifted fingerprints. In the experiment that involves the removal of these fingerprints, the validity of each fingerprint image is determined based on the estimated distance between the enrollment and verification fingerprints. Ten fingerprint images of five individuals are obtained; one image of each individual is used for enrollment (encryption), and the others are used for verification (decryption). As criteria for evaluating the performance, the average bit error rate (BER) of the restored bit pattern, the false rejection rate (FRR), and the false acceptance rate (FAR) are calculated as follows:

$$\mathrm{BER} = \frac{N_{\mathrm{Error}}}{N_{\mathrm{PIN}}},$$
$$\mathrm{FRR} = \frac{N_{\mathrm{FP}}}{N_S},$$
$$\mathrm{FAR} = \frac{N_{\mathrm{FN}}}{N_S},$$

where $N_{\mathrm{PIN}}$ is the number of bits in the PIN (64 bits in this case); $N_{\mathrm{Error}}$ is the average number of error bits in the decoded PIN; $N_{\mathrm{FP}}$ is the number of trials in which a genuine fingerprint is falsely rejected; $N_{\mathrm{FN}}$ is the number of trials in which a different individual's fingerprint is falsely accepted; and $N_S$ is the total number of trials in the experiment. When the BER is equal to zero, the verification is considered a “success”; otherwise, it is considered a “failure.”
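The three measures could be tallied over the decryption trials as sketched below, following the definitions above. The per-trial record (is_genuine, n_error_bits) is an assumed data structure, and $N_S$ is taken as the total number of trials, as stated in the text.

```python
N_PIN = 64  # number of bits in the PIN

def evaluate(trials):
    """trials: list of (is_genuine, n_error_bits) tuples, one per decryption trial."""
    n_s = len(trials)
    ber = sum(e for _, e in trials) / (n_s * N_PIN)            # average bit error rate
    frr = sum(1 for g, e in trials if g and e > 0) / n_s       # genuine print, BER > 0
    far = sum(1 for g, e in trials if not g and e == 0) / n_s  # impostor print, BER = 0
    return ber, frr, far
```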

Examples of the resultant images are shown in Fig. 9. Figure 9(f) shows an image decrypted using the fingerprint of the same individual; in this case, the bit pattern of the correct PIN is restored, although it is shifted depending on the shift of the input fingerprint. As shown in Fig. 9(i), an image decrypted using the fingerprint of a different individual produces a random pattern. Table 1 shows the verification accuracy of the two experiments conducted with (Exp. B) and without (Exp. A) the removal of significantly shifted fingerprints. In Exp. B, a fingerprint image is rejected for verification when the distance between the two fingerprints, estimated by the preprocessing explained in section 3, is greater than 50 pixels. Figure 10 shows the actual position difference between the enrollment and verification fingerprints along with the result of verification, i.e., success, failure, or exclusion. The FRR of Exp. B is improved by removing the significantly shifted fingerprints.

Fig. 9. Resultant images: (a) fingerprint image for enrollment; (b) encryption key image generated from (a); (c) original PIN image; (d) same individual’s fingerprint image for verification; (e) decryption key image generated from (d); (f) image decrypted using (e); (g) different individual’s fingerprint image for verification; (h) decryption key image generated from (g); (i) image decrypted using (h).

Table 1. Accuracy of experimental verification

Fig. 10. Actual position differences between the two fingerprints used in the experiment, together with the verification results (a) without and (b) with removal of the significantly shifted fingerprints.

4.3 Discussion

We now discuss the effectiveness of the proposed method. The experimental result shown in Fig. 10 indicates that most of the significantly shifted fingerprints can be eliminated, thereby improving the FRR. This is because the distance between the two fingerprints is estimated accurately. For the core location to be estimated accurately, it is essential that the fingerprint image captured by the sensor contains an upward-convex arc with roughly the same angular range as the template. However, such an arc is not always captured by the sensor, although all fingerprint images used in the experiments contained one; this could degrade the verification accuracy. In addition, the number of individuals in the training set might not be sufficient to generate a template suitable for every type of fingerprint. An optimal method of selecting the training set will be investigated in the future. The preprocessing, which involves only the correlation of two images and the subtraction of two vectors, is a very fast and simple operation. Therefore, the proposed method is sufficiently effective for satisfying the requirement described in section 3. Further, it is more robust against noise and intermittent ridges than the conventional method in which the core is detected by counting the number of ridges [14].

The verification accuracy of the current experimental system is lower than that of conventional fingerprint verification systems, and its level of security is insufficient. Therefore, it needs to be improved for practical application.

5. Conclusion

We propose preprocessing to improve the FRR in the fingerprint verification system based on optical image encryption. In the proposed method, the validity of fingerprints is determined based on the position difference between two fingerprints. This method is effective in improving the FRR, although the estimation accuracy of the position difference is not very high. Moreover, the influence of preprocessing on time consumption during verification is small since it is a fast and simple technique.

However, some problems of the proposed authentication system, which combines fingerprint verification with conventional PIN verification, must be solved for practical application. One is the verification accuracy, which has been improved in this paper but requires further improvement. The others are the evaluation of security and the implementation method, which have not been investigated sufficiently and are the main topics of our future research.

Acknowledgments

This research was partially supported by the Ministry of Education, Science, Sports and Culture, Grant-in-Aid for Exploratory Research, 2005, 16656115 (Japan).

References and links

1. S. Ishida, M. Mimura, and Y. Seto, “Development of Personal Authentication Techniques Using Fingerprint Matching Embedded in Smart Cards,” IEICE Trans. Inf. & Syst. E84-D, 812–818 (2001).

2. E. Watanabe and K. Kodate, “Implementation of a high-speed face recognition system that uses an optical parallel correlator,” Appl. Opt. 44, 666–676 (2005). [CrossRef]   [PubMed]  

3. B. Javidi and J.L. Horner, “Optical pattern recognition for validation and security verification,” Opt. Eng. 33, 1752–1756 (1994). [CrossRef]  

4. P. Refregier and B. Javidi, “Optical encryption based on input plane Fourier plane random encoding,” Opt. Lett. 20, 767–769 (1995). [CrossRef]   [PubMed]  

5. Bor Wang, Ching-Cherng Sun, Wei-Chia Su, and Arthur E. T. Chiou, “Shift-Tolerance Property of an Optical Double-Random Phase-Encoding Encryption System,” Appl. Opt. 39, 4788–4793 (2000). [CrossRef]  

6. H. Suzuki, T. Yamaya, T. Obi, M. Yamaguchi, and N. Ohyama, “Fingerprint verification for smart-card holders based on optical image encryption scheme,” in Optical Information Systems, Bahram Javidi and Demetri Psaltis, eds., Proc. SPIE 5202, 88–96 (2003). [CrossRef]  

7. H. Suzuki, T. Yamaya, T. Obi, M. Yamaguchi, and N. Ohyama, “Fingerprint verification for smart card holders identification based on optical image encryption,” Jpn. J. Opt. 33, 37–44 (2004) in Japanese.

8. J.L. Horner and P.D. Gianino, “Phase-only matched filtering,” Appl. Opt. 23, 812–816 (1984) [CrossRef]   [PubMed]  

9. K. Ito, et al., “A fingerprint matching algorithm using phase-only correlation,” IEICE Trans. Fundam. Electron. Commun. Comput. Sci. E87-A, 682–691 (2004).

10. Shoude Chang, Simon Boothroyd, Paparao Palacharla, and Sethuraman Pachanathan, “Rotation-invariant pattern recognition using a joint transform correlator,” Opt. Commun. 127, 107–116 (1984). [CrossRef]  

11. Vahid R. Riasati, Partha P. Banerjee, Mustafa A. G. Abushagur, and Kenneth B. Howell, “Rotation-invariant synthetic discriminant function filter for pattern recognition,” Opt. Eng. 39, 1156–1161 (2000). [CrossRef]  

12. Official website, “QR code.com,” http://www.denso-wave.com/qrcode/index-e.html.

13. J. Tanida and Y. Ichioka, “Optical logic array processor using shadowgrams,” J. Opt. Soc. Am. 73, 800–809 (1983). [CrossRef]  

14. S. Ito, T. Kanaoka, Y. Hamamoto, and S. Tomita, “An Algorithm for Classification of Fingerprints Based on the Core,” IEICE Trans. D-II J73-D-II, 1733–1741 (1990), in Japanese.
