
Automatic algorithm for the characterization of sweat ducts in a three-dimensional fingerprint

Open Access

Abstract

In this study, an automatic algorithm is presented based on a convolutional neural network (CNN) employing U-net. An ellipsoid and an ellipse are used to approximate the three-dimensional (3D) sweat ducts and the en face sweat pores at different depths, respectively. The results demonstrate that the length and the diameter of the ellipsoid can be used to quantitatively describe the sweat ducts, which has potential for estimating the resonance frequency in the millimeter (mm) wave and terahertz (THz) wave ranges. In addition, projection-based sweat pores were extracted to remove the dependence of the en face pore diameters on depth. Finally, the projection-based image of sweat pores was superposed with a maximum intensity projection (MIP)-based internal fingerprint to construct a hybrid internal fingerprint, which can be applied to identity recognition and information encryption.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Biometric techniques based on information from a living body, such as the fingerprint, iris, vein, and face, have become promising approaches for ensuring user privacy and are applied to identity recognition [1,2] and information encryption [3–5]. Considering technical difficulty and cost, the fingerprint is often the preferred modality. To date, commercial optical fingerprint scanners have been widely used, based on frustrated total internal reflection (FTIR) at a right-angle prism or on direct imaging of the finger’s skin surface [6,7]. However, the surface fingerprint is easily degraded by contamination and scars. Moreover, the fingerprint can be artificially replicated from the fingertip skin. Furthermore, the image quality of fingerprints is seriously degraded in wet conditions.

A variety of techniques have been developed to overcome the above problems, including polarization-resolved scattered light [8], a capacitive fingerprint sensor combined with simultaneous detection of pressure and skin temperature [9], and ultrasonic [10] methods. The fingerprint scanner based on polarization-resolved scattered light is insensitive to moisture and capable of identifying a fake synthetic fingerprint [8]. In addition, the capacitive fingerprint sensor combined with simultaneous detection of pressure and skin temperature can distinguish real and artificial fingerprints [9]. Finally, the ultrasonic fingerprint sensor works well on wet fingers and is harder to spoof, since it captures fingerprint images at multiple depths of hundreds of microns [10]. However, these methods still have difficulty acquiring fingerprints from damaged fingertip skin.

Fortunately, there exists an intermediate layer of skin, called the papillary junction, located at the junction between the epidermis and dermis. In our previous work, the fingerprint in this layer was shown to have the same structural features as the surface fingerprint [11], consistent with other findings [12–14]. Recently, optical coherence tomography (OCT) has been applied to three-dimensional (3D) fingerprint imaging [15–19]. OCT can image up to 1 mm beneath the skin surface, which completely covers the depth of the papillary junction; a typical 3D fingerprint is shown in Fig. 1(a). To distinguish it from the fingerprint of the skin surface obtained by conventional fingerprint sensors, the 3D fingerprint captured by OCT is called a subsurface fingerprint. Subsurface fingerprint images are constructed from en face OCT images at a specific depth or averaged over several successive depths [20–22]. However, the subsurface fingerprint varies with depth and can be divided into three layers of fingerprint: the stratum corneum, internal, and papillary fingerprints [14].


Fig. 1. (a) 3D OCT image of a fingerprint; (b) a typical cross-sectional OCT (x-z) image of fingertip skin and the skin structure; (c) a surface fingerprint image in the stratum corneum; (d) en face OCT image of sweat ducts between lines II and III; (e) an internal fingerprint image at the DEJ.


In order to overcome this issue, the maximum intensity projection (MIP) of the dermal-epidermal junction (DEJ) region was proposed in our previous study [11]. However, the internal fingerprint does not include sweat pores, which constitute the third-level features for the characterization of human fingertip fingerprints [23]. This is because the sweat ducts are located in the epidermis. In addition, the size, shape, position, and number of the pores are quite distinctive and have been proposed for identification [24–26], and the length and diameter of the 3D sweat ducts are significant for the electromagnetic response in the interaction of electromagnetic waves (including mm and THz waves) with human skin [27,28]. However, the characterization of sweat ducts in OCT images has not been investigated in detail.

In this study, an automatic algorithm has been developed for quick estimation of the size of the 3D sweat ducts and en face sweat pores to overcome the above issues. The 3D sweat ducts in fingertip skin were segmented from the epidermis using a convolutional neural network (CNN) and quantitatively described with an ellipsoid shape. An ellipse was then used to approximate the en face images of the sweat pores at different depths, and the projection-based image of the sweat pores was superposed with the surface fingerprint, the internal fingerprint, and the MIP-based internal fingerprint to construct hybrid fingerprints.

2. Methods

2.1 OCT system for the 3D structure of fingertip skin

In this work, a home-built spectral-domain OCT system was employed to capture the subcutaneous morphology of fingertip skin. The detailed specifications of the device can be found in [11]. The axial and lateral resolutions in air are 8.9 µm and 18.2 µm, respectively. A 3 mm thick glass window was placed in contact with the finger to flatten the curved fingertip surface, as in scanning with conventional fingerprint scanners, and the glass was tilted at a small angle (<5°) to reduce noise and keep the scan angle fixed. Figure 1(a) shows a typical 3D fingerprint, constructed from 400 cross-sectional images such as the one in Fig. 1(b). Each B-scan has an image size of 248 pixels × 345 pixels, and the 400 B-scans are acquired at 25 µm increments of the light-beam position. Thus, the OCT-based fingerprint occupies 248 pixels × 345 pixels × 400 B-scans in 3D space, corresponding to a volume of 1.4 mm × 8.5 mm × 10 mm. Fifteen volunteers (nine males and six females, aged 18 to 40 years) took part in this experiment, and measurements were performed on 10 fingers of each person, resulting in data for 150 fingers in total.
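To make this sampling geometry concrete, the short sketch below (a minimal Python calculation, not part of the original processing pipeline) converts the stated pixel counts and fields of view into approximate voxel spacings; the z-x-y ordering of the axes is an assumption.

```python
# Stated acquisition geometry: 248 (axial) x 345 (lateral) pixels per B-scan,
# 400 B-scans, covering roughly 1.4 mm x 8.5 mm x 10 mm.
n_z, n_x, n_y = 248, 345, 400
fov_z_mm, fov_x_mm, fov_y_mm = 1.4, 8.5, 10.0

dz_um = fov_z_mm / n_z * 1000.0   # axial spacing, about 5.6 um per pixel
dx_um = fov_x_mm / n_x * 1000.0   # lateral spacing within a B-scan, about 24.6 um
dy_um = fov_y_mm / n_y * 1000.0   # spacing between B-scans, 25 um (the stated increment)

print(f"approximate voxel spacing (z, x, y): ({dz_um:.1f}, {dx_um:.1f}, {dy_um:.1f}) um")
```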

The surface fingerprint is located in the stratum corneum, between lines I and II, as shown in Fig. 1(c), and the internal fingerprint is at the DEJ, between lines III and IV, as shown in Fig. 1(e). The sweat ducts are distributed between lines II and III, as shown in Fig. 1(d).

2.2 Automatic algorithm for the characterization of sweat ducts

Figure 2(a) shows a flowchart of the automatic algorithm for characterizing the sweat ducts segmented from the epidermis using the CNN. An ellipsoid with the same normalized second central moments as the duct region is used to approximate each 3D sweat duct, and an ellipse is used to approximate each en face sweat pore at a specific depth. Finally, a projection-based image of the sweat pores is used to construct the hybrid surface and internal fingerprints.


Fig. 2. (a) Flowchart for characterization of the sweat ducts; (b) cross-sectional OCT image of fingertip skin; (c) the boundaries of the surface and the DEJ; (d) the sweat ducts; (e) surface fingerprint; (f) internal fingerprint.


Firstly, the boundaries of the surface and the DEJ and the sweat ducts were individually segmented from the cross-sectional OCT images with a CNN employing U-net, which is well suited to the adaptive and automatic segmentation of OCT data. Its architecture consists of a contracting path and a symmetric expanding path [29]; the settings can be found in our previous work [30]. Figure 2(b) shows a typical cross-sectional OCT image of fingertip skin, which is classified into three components for segmentation: background, sweat ducts, and the region of interest (ROI). The ROI includes the stratum corneum, epidermis, and DEJ, as shown in Fig. 2(c). Two cross-sectional OCT images were randomly selected from the 400 images of each finger, giving 300 images from the 150 fingers. The three components in each cross-sectional image were labeled manually, image by image, and the image intensity values were normalized to [0, 255]. These images were randomly split into training (70%) and test (30%) sets. The resulting accuracy on the test set was 0.95.
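As an illustration of this segmentation step, the following is a minimal sketch of a three-class U-Net-style network in PyTorch. The channel counts, network depth, padding of the input to 256 × 352, and training settings are illustrative assumptions, not the authors' exact configuration, which is described in [29,30].

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, as in the U-Net contracting/expanding paths [29].
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class SmallUNet(nn.Module):
    """Two-level U-Net-style segmenter with per-pixel logits for three classes:
    0 = background, 1 = sweat duct, 2 = ROI (stratum corneum + epidermis + DEJ)."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.enc1 = conv_block(1, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                                    # full resolution
        e2 = self.enc2(self.pool(e1))                        # 1/2 resolution
        b = self.bottleneck(self.pool(e2))                   # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

# Illustrative training step on one B-scan padded to 256 x 352 (divisible by 4);
# intensities are scaled to [0, 1] here rather than [0, 255].
model = SmallUNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
bscan = torch.rand(1, 1, 256, 352)            # stand-in for a normalized B-scan
labels = torch.randint(0, 3, (1, 256, 352))   # stand-in for the manual label map
loss = criterion(model(bscan), labels)
loss.backward()
optimizer.step()
```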

After segmentation by the CNN, the boundaries of the surface and the DEJ were extracted from the top and bottom boundaries of the ROI, as shown in Fig. 2(c); these were used to locate the surface fingerprint [Fig. 2(e)] and the internal fingerprint [Fig. 2(f)], respectively. Meanwhile, the sweat ducts were segmented as shown in Fig. 2(d).
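A simplified sketch of this boundary extraction is given below, assuming the CNN's ROI output is available as a boolean NumPy array (depth × lateral) for one B-scan; the array names and the way the intensity is sampled along the boundaries are illustrative.

```python
import numpy as np

def roi_boundaries(roi_mask):
    """Return, for every A-line of a B-scan, the depth index of the top (surface)
    and bottom (DEJ) boundary of a boolean ROI mask (depth x lateral).
    Columns without any ROI pixel are returned as -1."""
    has_roi = roi_mask.any(axis=0)
    top = np.where(has_roi, roi_mask.argmax(axis=0), -1)
    # argmax on the depth-flipped mask locates the last ROI pixel from the bottom.
    bottom = np.where(has_roi, roi_mask.shape[0] - 1 - roi_mask[::-1].argmax(axis=0), -1)
    return top, bottom

# Illustrative use: sample the OCT intensity along each boundary to obtain one
# row of the surface and internal fingerprint images (dummy arrays below).
bscan = np.random.rand(248, 345)
roi = np.zeros_like(bscan, dtype=bool)
roi[60:180, :] = True                   # stand-in for the CNN's ROI output
top, bottom = roi_boundaries(roi)
cols = np.arange(bscan.shape[1])
surface_row = bscan[top, cols]          # contributes to a Fig. 2(e)-like image
internal_row = bscan[bottom, cols]      # contributes to a Fig. 2(f)-like image
```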

Secondly, the 400 cross-sectional duct segmentations shown in Fig. 2(d) were stacked to construct the 3D sweat ducts shown in Fig. 3(a). The sweat duct is known to be a spiral structure [31,32], and 3D sweat ducts can also be extracted with Frangi's filter and a threshold value [33]. A regular spiral structure is quantitatively characterized by its diameter D3, duct length L, spacing S, and number of turns N [27,28,34,35], and an ellipsoid with principal semi-axis lengths a, b, and c is employed to describe it quantitatively, as shown in Fig. 3(c). The ellipsoid approximation has the same normalized second central moments as the duct region. For a regular spiral, a and c are equal, so D3 = 2a = 2c = a + c and L = 2b. However, the 3D helical shape is hard to distinguish clearly in the CNN-segmented OCT images shown in Fig. 3(a), although its handedness can be resolved with orthogonal-polarization-gating OCT [36]. Figure 3(a) shows that the 3D sweat ducts are irregular in shape, so the ellipsoid approximation, in which a and c are usually not equal, is applied for their quantitative characterization. For a duct chosen from Fig. 3(a), Fig. 3(b) shows the ellipsoid used to describe its shape: the diameter D3 is approximated as (a + c) and the duct length as L = 2b, which can then be used for estimating the resonance frequency in the interaction of electromagnetic waves with finger skin.
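Under the semi-axis reading of a, b, and c (so that L = 2b), the moment-based ellipsoid fit can be sketched as follows: the eigenvalues of the covariance of the duct voxel coordinates are the second central moments along the principal axes, and for a solid ellipsoid each such moment equals the squared semi-axis divided by five. The voxel spacing and the dummy duct mask below are assumptions for illustration, not the authors' exact implementation.

```python
import numpy as np

def ellipsoid_semi_axes(duct_mask, spacing_um):
    """Approximate one segmented 3D sweat duct (boolean z-x-y volume) by an
    ellipsoid with the same second central moments. Returns the semi-axis
    lengths in um, sorted from longest to shortest."""
    coords = np.argwhere(duct_mask) * np.asarray(spacing_um)  # voxel -> physical coords
    cov = np.cov(coords, rowvar=False)                        # second central moments
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
    # For a solid ellipsoid the variance along a principal axis is (semi-axis)^2 / 5.
    return np.sqrt(5.0 * eigvals)

# Dummy duct mask standing in for one connected component of the CNN output.
duct_mask = np.zeros((60, 30, 30), dtype=bool)
duct_mask[10:50, 12:18, 12:18] = True

b, a, c = ellipsoid_semi_axes(duct_mask, spacing_um=(5.6, 24.6, 25.0))
L = 2.0 * b      # duct length, L = 2b
D3 = a + c       # effective duct diameter, D3 ~ a + c
```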


Fig. 3. (a) 3D sweat ducts; (b) the ellipsoid with principal axis lengths a, b, c; (c) an ellipsoid approximation of the regular spiral structure with principal axis lengths a, b, c; (d) an en face image of sweat ducts at a specific depth; (e) an ellipse with major and minor axis lengths.


In addition, en face images of the 3D sweat ducts at different depths are two-dimensional sweat pores. Figure 3(d) shows that the en face image of the sweat ducts at a specific depth becomes pores of different sizes. An ellipse with the same normalized second central moments as the sweat pore, shown in Fig. 3(e), is used to characterize it. The size of the ellipse is described by the major axis length Lmax and the minor axis length Lmin, and the pore diameter is approximated as D2 = (Lmax + Lmin)/2, which is used to study the depth dependence of the sweat pores.
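A minimal sketch of this ellipse fit is given below, assuming scikit-image's regionprops (which fits an ellipse with the same normalized second central moments as each labeled region) and an approximate lateral pixel size of 25 µm; both choices are assumptions for illustration.

```python
import numpy as np
from skimage.measure import label, regionprops

def pore_diameters(enface_mask, pixel_um=25.0):
    """Fit each connected pore in an en face duct mask with an ellipse having the
    same normalized second central moments (regionprops) and return
    D2 = (Lmax + Lmin) / 2 for every pore, in um."""
    d2 = [0.5 * (p.major_axis_length + p.minor_axis_length) * pixel_um
          for p in regionprops(label(enface_mask))]
    return np.array(d2)

# Dummy en face slice; a real input would be one depth slice of the segmented ducts.
enface = np.zeros((100, 100), dtype=bool)
enface[20:26, 30:34] = True
enface[60:68, 70:75] = True
print(pore_diameters(enface))
```

Repeating this fit for every depth slice yields depth-dependent diameter distributions of the kind shown later in Fig. 5.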

Finally, the 3D sweat ducts were projected into en face sweat pores: the gray values of the en face pore images at different depths are summed along the depth direction, and any value greater than 1 is set to 1, forming the projection-based sweat pores. The projection-based sweat pores are independent of depth and are superposed with the surface fingerprint and the internal fingerprint to construct the corresponding hybrid fingerprints. Furthermore, the projection-based image of sweat pores is superposed with the MIP-based internal fingerprint, which is extracted as the MIP image of the DEJ region; as demonstrated in our previous study [11], this fingerprint offers higher structural similarity and more robust operation because it is independent of depth and insensitive to the state of the skin surface.
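The projection and superposition steps can be sketched as follows; the overlay rule (overwriting pore pixels with a fixed value) and the dummy array sizes are assumptions for illustration, not the authors' exact superposition method.

```python
import numpy as np

def projection_pores(duct_volume):
    """Collapse the binary 3D duct segmentation (z, x, y) into a depth-independent
    pore map: sum along depth, then set every value greater than 1 to 1."""
    return np.clip(duct_volume.sum(axis=0), 0, 1)

def mip_internal_fingerprint(oct_volume, dej_top, dej_bottom):
    """Maximum intensity projection over the DEJ depth range [dej_top, dej_bottom)."""
    return oct_volume[dej_top:dej_bottom].max(axis=0)

def hybrid_fingerprint(fingerprint, pores, pore_value=1.0):
    """Overlay the projection-based pore map on a normalized fingerprint image by
    overwriting pore pixels with a fixed value (illustrative overlay rule)."""
    hybrid = fingerprint.copy()
    hybrid[pores > 0] = pore_value
    return hybrid

# Illustrative use with dummy arrays (smaller than the acquired volume for brevity).
oct_volume = np.random.rand(100, 200, 200)
duct_volume = (np.random.rand(100, 200, 200) > 0.999).astype(np.uint8)
pores = projection_pores(duct_volume)
internal = mip_internal_fingerprint(oct_volume, dej_top=60, dej_bottom=90)
hybrid = hybrid_fingerprint(internal / internal.max(), pores)
```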

3. Results and discussions

The shape of the 3D sweat duct is quantitatively described with an ellipsoid. Figure 4(a) shows that the duct length L in the fingertip skin ranges from 154.0 µm to 395.4 µm, with an average value of 261.0 ± 43.5 µm. This average lies within the range reported for the fingertip in [35], from the shortest ducts (157 ± 46 µm) to the longest (600 ± 294 µm), and is consistent with those results. In addition, Fig. 4(b) shows that the duct diameter D3 ranges from 57.0 µm to 140.4 µm, with an average of 109.0 ± 16.7 µm, which is slightly larger than the 95 ± 11 µm reported in [35] and slightly smaller than the 140 µm used for the axial-mode resonance frequency in [28].


Fig. 4. The distribution of the ducts’ (a) length and (b) diameter.


It has been reported that human sweat ducts play an important role in the interaction of electromagnetic waves with human skin, including mm waves and THz waves [27,37]. The resonance frequency f of fingertip skin is given by f ∼ (c/n)·2S/(πD3)² [34], where c is the velocity of light, n is the refractive index of the skin, and S is the spacing between the turns. Although S cannot be acquired directly in this work, it can be computed as S = L/N, where N is the number of turns, since the duct length is proportional to the number of turns [35]. For the sweat ducts to function as antennas, the number of turns should be at least three [35], so the minimum duct length can be associated with three turns. In this study, the length L and diameter D3 of the sweat ducts are estimated from the ellipsoid approximation for quantitative estimation of the spectral response.
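As an illustrative, order-of-magnitude use of this relation with the average duct dimensions reported above, the sketch below assumes N = 3 turns and a skin refractive index of about 2 in the THz range; both are assumptions rather than measured values, and the estimate depends strongly on them.

```python
import numpy as np

# Illustrative estimate of f ~ (c/n) * 2S / (pi * D3)^2 using the average duct
# dimensions reported in this section; n and N are assumed values, not measured here.
c = 3.0e8            # speed of light, m/s
n = 2.0              # assumed refractive index of skin in the THz range
L = 261.0e-6         # average duct length, m (Fig. 4)
D3 = 109.0e-6        # average duct diameter, m (Fig. 4)
N = 3                # minimum number of turns for antenna-like behavior [35]
S = L / N            # spacing between turns, approximated as L / N

f = (c / n) * 2.0 * S / (np.pi * D3) ** 2
print(f"estimated resonance frequency: {f / 1e12:.2f} THz")   # about 0.22 THz under these assumptions
```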

An en face section of a 3D ellipsoidal sweat duct has the shape of an ellipse, so the en face images of the 2D sweat pores vary with depth, as shown in Fig. 5. Figures 5(e)-(h) show that the corresponding diameter distributions vary with depth, and the average diameter as a function of depth is plotted in Fig. 6(a). In addition, since sweat pores are generally considered to be distributed along the ridges, Figs. 5(a), (b), and (d) indicate that some sweat pores are absent in the regions labeled with circles. Figure 5(c) shows a complete display of the sweat pores distributed along the ridges; this en face image is at the depth corresponding to the peak of the average-diameter distribution.


Fig. 5. Depth-dependent en face sweat ducts at depths of (a) 183 µm, (b) 244 µm, (c) 305 µm, and (d) 365 µm, and (e)-(h) the corresponding diameter distributions.



Fig. 6. (a) Average diameter of the sweat pores as a function of depth, (b) projection-based sweat pores, and (c) diameter distribution of the projection-based sweat pores.


In order to overcome the depth dependence and the absence of some sweat pores, projection-based sweat pores were proposed to transfer the 3D sweat ducts into two-dimensional sweat pores, as shown in Fig. 6(b). The diameter distribution of the projection-based sweat pores is shown in Fig. 6(c); the average diameter is 199.1 ± 36.3 µm, slightly larger than the value at the peak in Fig. 6(a).

The hybrid surface fingerprint, i.e., the surface fingerprint superposed with the projection-based image of sweat pores, is similar to a traditional fingerprint and is generally characterized in a hierarchical order at three levels, as shown in Fig. 7(a). The first level is the pattern of the fingerprint ridges; the second level includes minutia points such as ridge bifurcations and endings, labeled with blue circles; and the third level contains pores and ridge contours. However, the surface fingerprint is easily damaged, so the internal or subsurface fingerprint was proposed to overcome this issue. In this work, though, the internal fingerprint at the DEJ boundary is fuzzy, and it is hard to distinguish minutiae, as shown in Fig. 7(b). The MIP image [11] of the DEJ was therefore used to extract the internal fingerprint, as shown in Fig. 7(c). The MIP-based internal fingerprint superposed with the projection-based image of sweat pores also contains all three levels of features.


Fig. 7. Hybrid (a) surface fingerprint, (b) internal fingerprint, and (c) MIP-based internal fingerprint superposed with projection-based sweat pores. Blue circles denote minutiae.


The hybrid MIP-based internal fingerprint superposed with the projection-based image of sweat pores combines the internal fingerprint and the 3D sweat ducts, which are individually segmented by the CNN, into complete fingerprint information. Since both the MIP-based internal fingerprint, derived from the MIP image of the DEJ, and the projection-based sweat pores are insensitive to depth, the hybrid fingerprint is independent of depth, which is important for identity recognition and information encryption. A further advantage is that the segmentation is automatic and pixel-wise.

Other hybrid fingerprints have been reconstructed from the three layers of the subsurface fingerprint, with the layers separated by the peaks and troughs of a single A-line obtained by accumulating all the A-lines in a B-scan image [20]. However, that algorithm suffers from boundary inaccuracy due to the inhomogeneity of fingertip skin, and two empirical values are needed to increase the image contrast in its superposition step. The hybrid MIP-based internal fingerprint is therefore superior to the superposition algorithm for the three subsurface fingerprint layers.

4. Conclusions

We have developed an automatic algorithm to quantitatively describe sweat ducts in fingertip skin based on optical coherence tomography. Firstly, the 3D sweat ducts are segmented from the epidermis using a CNN. Then an ellipsoid and an ellipse are applied to approximate the 3D sweat ducts and the en face sweat pores, respectively, and the length and diameter of the ellipsoid are used for quantitative characterization of the sweat ducts, which has potential application to estimating the resonance frequency in the mm-wave and THz-wave ranges. In addition, the diameters of the en face sweat pores depend on depth, and the projection-based sweat pores overcome this issue. Finally, the projection-based image of sweat pores is superposed with the MIP-based internal fingerprint to construct a hybrid internal fingerprint, which can be applied to identity recognition and information encryption.

Funding

National Natural Science Foundation of China (61875038).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. K. K. Prasad and P. S. Aithal, “Literature review on fingerprint level 1 and level 2 features enhancement to improve quality of image,” IJMTS 2(2), 8–19 (2017). [CrossRef]  

2. C. Shu and X. Ding, “Multi-biometrics fusion for identity verification,” Proceedings - International Conference on Pattern Recognition, 493 (2006). [CrossRef]  

3. Y. Su, W. Xu, and J. Zhao, “Optical image encryption based on chaotic fingerprint phase mask and pattern-illuminated Fourier ptychography,” Opt. Lasers Eng. 128, 106042 (2020). [CrossRef]  

4. T. Zhao, Q. Ran, L. Yuan, Y. Chi, and J. Ma, “Image encryption using fingerprint as key based on phase retrieval algorithm and public key cryptography,” Opt. Lasers Eng. 72, 12–17 (2015). [CrossRef]  

5. F. G. Hashad, O. Zahran, E. S. M. EI-Rabaie, I. F. Elashry, and F. E. A. Ei-Samie, “Fusion-based encryption scheme for cancelable fingerprint recognition,” Multimed Tools Appl 78(19), 27351–27381 (2019). [CrossRef]  

6. A. Kumar and C. Kwong, “Towards contactless, low-cost and accurate 3D fingerprint identification,” IEEE Trans. Pattern Anal. Mach. Intell. 37(3), 681–696 (2015). [CrossRef]  

7. R. D. Labati, A. Genovese, V. Piuri, and F. Scotti, “Toward unconstrained fingerprint recognition: a fully touchless 3-D system based on two views on the move,” IEEE Trans. Syst. Man Cybern, Syst. 46(2), 202–219 (2016). [CrossRef]  

8. S. W. Back, Y. G. Lee, S. S. Lee, and G. S. Son, “Moisture-insensitive optical fingerprint scanner based on polarization resolved in-finger scattered light,” Opt. Express 24(17), 19195 (2016). [CrossRef]  

9. J. Zhu, X. Yang, X. Meng, Y. Wang, Y. Yin, X. Sun, and G. Dong, “Computational ghost imaging encryption based on fingerprint phase mask,” Opt. Commun. 420, 34–39 (2018). [CrossRef]  

10. X. Jiang, H. Y. Tang, Y. Lu, E. J. Ng, J. M. Tsai, B. E. Boser, and D. A. Horsley, “Ultrasonic fingerprint sensor with transmit beamforming based on a PMUT array bonded to CMOS circuitry,” IEEE Trans. Ultrason., Ferroelect., Freq. Contr. 64(9), 1401–1408 (2017). [CrossRef]  

11. Z. Zhong, J. L. Zhang, Z. Li, Y. Lin, and S. Wu, “Depth-independent internal fingerprint based on optical coherence tomography,” Opt. Express 29(11), 16991–17000 (2021). [CrossRef]  

12. B. Ding, H. Wang, P. Chen, Y. Zhang, Z. Guo, J. Feng, and R. Liang, “Surface and internal fingerprint reconstruction from optical coherence tomography through convolutional neural network,” IEEE Trans. Inf. Forensics Secur. 16, 685–700 (2021). [CrossRef]  

13. L. N. Darlow and J. Connan, “Efficient internal and surface fingerprint extraction and blending using optical coherence tomography,” Appl. Opt. 54(31), 9258–9268 (2015). [CrossRef]  

14. L. N. Darlow and J. Connan, “Study on internal to surface fingerprint correlation using optical coherence tomography and internal fingerprint extraction,” J. Electron. Imaging 24(6), 063014 (2015). [CrossRef]  

15. A. Bossen, R. Lehmann, and C. Meier, “Internal fingerprint identification with optical coherence tomography,” IEEE Photonics Technol. Lett. 22(7), 507–509 (2010). [CrossRef]  

16. J. Aum, J. H. Kim, and J. Jeong, “Live acquisition of internal fingerprint with automated detection of subsurface layers using OCT,” IEEE Photonics Technol. Lett. 28(2), 163–166 (2016). [CrossRef]  

17. E. Auksorius and A. C. Boccara, “Fingerprint imaging from the inside of a finger with full-field optical coherence tomography,” Biomed. Opt. Express 6(11), 4465–4471 (2015). [CrossRef]  

18. R. Breithaupt, C. Sousedik, and S. Meissner, “Full fingerprint scanner using optical coherence tomography,” Proceedings of the International Workshop on Biometrics and Forensics, (2015).

19. Y-W. Hsu, E. J-D Lee, L. Tseng, and Liu, “A novel binary method for fingerprint optical coherence tomography images, ” IEEE 3rd Global Conference on Consumer Electronics, 144–145 (2014)

20. F. Liu, G. Liu, Q. Zhao, and L. Shen, “Robust and high-security fingerprint recognition system using optical coherence tomography,” Neurocomputing 402, 14–28 (2020). [CrossRef]  

21. F. Liu, C. Shen, H. Liu, G. Liu, Y. Liu, Z. Guo, and L. Wang, “A flexible touch-based fingerprint acquisition device and a benchmark database using optical coherence tomography,” IEEE Trans. Instrum. Meas. 69(9), 6518–6529 (2020). [CrossRef]  

22. Y. Cheng and K. V. Larin, “In vivo two- and three-dimensional imaging of artificial and real fingerprints with optical coherence tomography,” IEEE Photonics Technol. Lett. 19(20), 1634–1636 (2007). [CrossRef]  

23. A. K. Jain, C. Yi, and M. Demirkus, “Pores and ridges: high-resolution fingerprint matching using level 3 features,” IEEE Trans. Pattern Anal. Mach. Intell. 29(1), 15–27 (2007). [CrossRef]  

24. V. Bhagwat, D. M. Kumar, and K. Naga, “Poroscopy -the study of sweat pores among central Indian population,” SIJAP 3(6), 53–56 (2020). [CrossRef]  

25. B. Wijerathne, “Poroscopy: an important research field in medicine and physical anthropology,” Anuradhapura Med. J. 9(2), 44–46 (2015). [CrossRef]  

26. M. Tafazoli, N. M. Shahri, H. Ejtehadi, F. Haddad, H. J. Nooghabi, M. M. Shahri, and S. Naderi, “Biological Variability of Sweat Gland Pores in the Fingerprints of a Fars Iranian Family from Khorasan Razavi Province, Iran,” Anatomical Sciences Journal 10(2), 99–104 (2013).

27. Y. Feldman, A. Puzenko, P. B. Ishai, A. Caduff, and A. J. Agranat, “Human skin as arrays of helical antennas in the millimeter and submillimeter wave range,” Phys. Rev. Lett. 100(12), 128102 (2008). [CrossRef]  

28. I. Hayut, A. Puzenko, P. B. Ishai, A. Polsman, A. J. Agranat, and Y. Feldman, “The Helical Structure of Sweat Ducts: Their Influence on the Electromagnetic Reflection Spectrum of the Skin,” IEEE Trans. THz Sci. Technol. 3(2), 207–215 (2013). [CrossRef]  

29. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015. MICCAI 2015. Lecture Notes in Computer Science, 9351. N. Navab, J. Hornegger, W. Wells, and A. Frangi, eds. Springer, Cham.

30. Y. Lin, D. Li, W. Liu, Z. Zhong, Z. Li, Y. He, and S. Wu, “A measurement of epidermal thickness of fingertip skin from OCT images using convolutional neural network,” J. Innovative Opt. Health Sci. 14(01), 2140005 (2021). [CrossRef]  

31. S. Takagi and M. Tagawa, “Predominance of right-handed spirals in human eccrine sweat ducts,” Jpn. J. Physiol. 5(2), 122–130 (1955). [CrossRef]  

32. S. Takagi and M. Tagawa, “Predominance of right-handed spirals in the intraepidermal sweat ducts in man and the primates,” Jpn. J. Physiol. 7, 113–118 (1957). [CrossRef]  

33. S. Shuang and Z. Guo, “Sweat Glands Extraction in Optical Coherence Tomography Fingerprints,” International Conference on Security, Pattern Analysis, and Cybernetics (SPAC), 579–584 (2017).

34. S. R. Tripathi, P. B. Ishai, and K. Kawase, “Frequency of the resonance of the human sweat duct in a normal mode of operation,” Biomed. Opt. Express 9(3), 1301 (2018). [CrossRef]  

35. S. R. Tripathi, E. Miyata, P. B. Ishai, and K. Kawase, “Morphology of human sweat ducts observed by optical coherence tomography and their frequency of resonance in the terahertz frequency region,” Sci Rep 5(1), 9071 (2015). [CrossRef]  

36. D. Li, Z. Li, J. Zhang, K. Li, S. Wu, Y. He, and Y. Lin, “Orthogonal-polarization-gating optical coherence tomography for human sweat ducts in vivo,” J. Biophotonics 14, e202000432 (2021). [CrossRef]  

37. N. Michael and I. Abdulhalim, “Does human skin truly behave as an array of helical antennae in the millimeter and terahertz wave ranges?” Opt. Lett. 35(19), 3180–3182 (2010). [CrossRef]  
