
A pyroelectric infrared biometric system for real-time walker recognition by use of a maximum likelihood principal components estimation (MLPCE) method

Open Access

Abstract

This paper presents a novel biometric system for real-time walker recognition using a pyroelectric infrared sensor, a Fresnel lens array, and signal processing based on linear regression of the sensor signal spectra. In the model training stage, the maximum likelihood principal components estimation (MLPCE) method is utilized to obtain the regression vector for each registered human subject. Receiver operating characteristic (ROC) curves are also investigated to select a suitable threshold for maximizing the subject recognition rate. The experimental results demonstrate the effectiveness of the proposed pyroelectric sensor system in recognizing registered subjects and rejecting unknown subjects.

©2007 Optical Society of America

1. Introduction

With recent advances in optical and digital technologies, novel sensors, and matching algorithms, a variety of biometric systems have attracted increasing research attention. A functional biometric system requires the human characteristic in use to be universal among all subjects under examination, distinctive between any two subjects, invariant over a period of time, and amenable to quantitative measurement [1].

In conventional biometric systems, the complex structure of certain body parts of each subject, such as the iris, fingerprints, or facial or hand geometry, is measured optically, analyzed digitally, and converted into a digital code. When a human walks, the motion of various components of the body, including the torso, arms, and legs, produces a characteristic signature. From the thermal perspective, each person acts as a distributed infrared source whose thermal distribution is determined by his/her geometric shape and the IR emission from the body. The average human frame radiates about 100 W/m² of power [2, 3], peaking at 9.55 μm [4]. The temperature difference between the human body and the environment drives a constant heat exchange. Combined with the idiosyncrasies in how individuals move, this body heat affects a surrounding sensor field in a unique way.
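As a quick consistency check (added here for illustration, assuming a radiating surface temperature of roughly 303 K, about 30 °C), Wien's displacement law reproduces this spectral peak:

\lambda_{\max} = \frac{b}{T} \approx \frac{2.898 \times 10^{-3}\ \mathrm{m \cdot K}}{303\ \mathrm{K}} \approx 9.6\ \mu\mathrm{m},

in agreement with the 9.55 μm value quoted above.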

The pyroelectric infrared (PIR) sensor has a high detection capability for IR radiation and has been used for a wide range of applications [5–8]. The PIR detector used in this work is low cost ($2 per piece), has low power consumption (2 mW), and is sensitive in the 5–14 μm range [9]. In this study, we used dual-element PIR sensors; a summary of the parameters of this PIR detector is given in Table 1. Dual-element detectors have the inherent advantage that the output voltage is the difference between the voltages obtained from the two elements, so that the impact of environmental temperature fluctuations is neutralized [9]. In Ref. [8], pyroelectric sensors were used for vehicle detection. The performance of this dual-element PIR sensor system is therefore robust to environmental temperature and suitable for both indoor and outdoor working environments. However, each detector element has a small detection area (2 mm²), so the amount of heat collected by the sensor is only a small fraction of the incident thermal radiation. In this system, a Fresnel lens array is employed to improve both the collection efficiency and the spatial resolution of the sensor [10].

In Ref. [11], we proposed the concept of using a pyroelectric IR sensor and a Fresnel lens array for human identification. We also developed a prototype real-time walker identification system that can discriminate among registered subjects in both path-dependent and path-independent modalities [12]. However, that system cannot reject unknown intruders because its identification algorithm is based on statistics of compressed data (discrete events) from a sensor array, which inevitably discards a considerable amount of discriminative information. In this paper, we develop a prototype real-time walker recognition system that can detect unknown intruders. To increase the detection rate for both registered subjects and unknown intruders, during training we employ a maximum likelihood principal components estimation (MLPCE) method to cluster the feature data with respect to the different registered subjects walking at different speed levels. This method provides a general way to incorporate a variety of error structures into the clustering problem. MLPCE can be subdivided into two steps: a principal components analysis (PCA) followed by a maximum likelihood estimation (MLE) [13, 14]. In the real-time testing phase, the feature data of subjects walking along the same path used in training, yet at random speeds, are tested against the pre-trained feature clusters to determine their identities. For such a detection system, the selection of a threshold for each registered subject becomes an important issue, and receiver operating characteristic (ROC) curve analysis is therefore employed. ROC analysis has been used in signal detection [15], medical diagnostics [16, 17], and machine learning [18, 19] to optimize decision thresholds. The tradeoff between recognition rate and false alarm rate at different thresholds is visualized in an ROC curve. By using the ROC plots, we can select a suitable rejection threshold for a group of registered subjects.

Table 1. Summary of parameters of the PIR detector.

2. Analysis

Fig. 1. The diagram of the recognition process.

Figure 1 outlines the recognition process, which contains training and testing phases. In the training stage, we seek a regression vector R such that the identity of unknown feature data can be estimated from the inner product of R with the feature data F, i.e.,

M = FR.  (1)

However, in reality, several measurements of the same quantity on the same subject will not in general be the same. This may be due to noise in the detected signal, natural variation in the subject, or variation in the measurement process. A general calibration model then is

M = \tilde{F} R + \varepsilon,  (2)

where F̃ is the error-free feature data and the residual ε is a measurement error vector having the same dimension as M. In this paper, the MLPCE method is utilized to find an optimal regression vector R that maps feature data onto the decision plane.

MLPCE can be divided into two steps: a principal components analysis (PCA) followed by a maximum likelihood estimation (MLE). PCA is a spectral decomposition of the matrix F, retaining only the factors that have large singular values. The remaining factors, associated with small singular values, are assumed to be noise and are therefore omitted from the regression phase. The singular value decomposition (SVD) of a spectral matrix F can be represented by

F_{m \times n} = U_{m \times m} \Sigma_{m \times n} V_{n \times n}^{T},  (3)

where U and V are orthogonal matrices, m is the number of samples, n is the number of points in one feature signal, and Σ is diagonal with nonnegative singular values in descending order.

The spectral matrix F can be approximated using its first k singular values, assuming those of order larger than k are negligible; k is typically determined by cross-validation. The remaining factors, associated with small singular values, are assumed to arise from noise. The resulting truncation gives

F \approx \tilde{F}_{k} = \tilde{U}_{m \times k} \tilde{\Sigma}_{k \times k} \tilde{V}_{n \times k}^{T},  (4)

with k \ll m, n.

The spectral matrix F can thus also be approximated as

F \approx P J^{T},  (5)

where P_{m \times k} = \tilde{U}_{m \times k} \tilde{\Sigma}_{k \times k}, J = \tilde{V}_{n \times k}, and FJ = P.

P is the score matrix and J is the factor matrix. In other words, J can be viewed as a new set of orthogonal coordinates spanning the inherent (true) dimensionality of the feature data matrix F, and P is the projection (scores) of F onto this new coordinate system. For convenience, we will call this coordinate system k-space.
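As an illustration of this truncation step, the following minimal NumPy sketch (function and variable names are ours, not from the paper) computes the score matrix P and factor matrix J from a feature matrix F:

import numpy as np

def truncated_svd_scores(F, k):
    """Project an m-by-n feature matrix F onto its first k principal factors.
    Returns the score matrix P (m x k) and the factor matrix J (n x k),
    so that F is approximated by P @ J.T, as in Eqs. (4)-(5)."""
    U, s, Vt = np.linalg.svd(F, full_matrices=False)  # F = U diag(s) Vt
    U_k = U[:, :k]                # truncated left singular vectors (m x k)
    S_k = np.diag(s[:k])          # k largest singular values
    J = Vt[:k, :].T               # factor matrix (n x k)
    P = U_k @ S_k                 # scores of F in k-space (m x k)
    return P, J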

Once we obtain the underlying factors and their corresponding scores, MLE is performed to find the regression vector R̂_{k×1} in k-space. Hence, Eq. (2) can be written as

M_{m \times 1} = P_{m \times k} \hat{R}_{k \times 1} + \hat{\varepsilon}_{m \times 1}.  (6)

In the classification process, the feature data are first projected onto the factors obtained during training, and the resulting scores are correlated with the calibration vector obtained by MLE in k-space. We see from Eq. (6) that the measurement contains the random error ε̂, rendering the measurement itself random. The error could be Gaussian or Poisson; here, we further assume that the measurement error is zero-mean Gaussian, i.e., ε̂ ∼ N(0, C_ε̂), where C_ε̂ is the covariance matrix. The maximum likelihood method seeks the regression vector that maximizes the conditional probability. Given a vector M of m observed measurements and the covariance matrix C_ε̂ of the Gaussian noise, the multivariate probability density function at M is given by

Q = \frac{1}{(2\pi)^{m/2} |C_{\hat{\varepsilon}}|^{1/2}} \exp\left[ -\frac{1}{2} (M - P\hat{R})^{T} C_{\hat{\varepsilon}}^{-1} (M - P\hat{R}) \right],  (7)

where |C_ε̂| is the determinant of C_ε̂.

The maximum likelihood estimate R̂_MLE is the one that maximizes Eq. (7) for a given measurement M. In other words, when R̂ = R̂_MLE in Eq. (7), the measurement M is most likely to be observed. Maximizing this probability density function is equivalent to minimizing the function

Q' = (M - P\hat{R})^{T} C_{\hat{\varepsilon}}^{-1} (M - P\hat{R}).  (8)

Since Q' in Eq. (8) is quadratic, R̂_MLE must satisfy the following equation:

\frac{\partial}{\partial \hat{R}} \left[ (M - P\hat{R})^{T} C_{\hat{\varepsilon}}^{-1} (M - P\hat{R}) \right] \Big|_{\hat{R} = \hat{R}_{\mathrm{MLE}}} = 0.  (9)

From Eq. (9),

P^{T} C_{\hat{\varepsilon}}^{-1} (M - P\hat{R}_{\mathrm{MLE}}) = 0.  (10)

The maximum likelihood estimate of the regression vector in k-space, R̂_MLE, is

\hat{R}_{\mathrm{MLE}} = (P^{T} C_{\hat{\varepsilon}}^{-1} P)^{-1} P^{T} C_{\hat{\varepsilon}}^{-1} M.  (11)

Finally, from Eqs. (5) and (11), the regression vector R can be written as

R_{n \times 1} = J_{n \times k} \hat{R}_{k \times 1, \mathrm{MLE}}
= \tilde{V}_{n \times k} (P^{T} C_{\hat{\varepsilon}}^{-1} P)^{-1} P^{T} C_{\hat{\varepsilon}}^{-1} M
= \tilde{V}_{n \times k} \left[ (\tilde{U}_{m \times k} \tilde{\Sigma}_{k \times k})^{T} C_{\hat{\varepsilon}}^{-1} (\tilde{U}_{m \times k} \tilde{\Sigma}_{k \times k}) \right]^{-1} (\tilde{U}_{m \times k} \tilde{\Sigma}_{k \times k})^{T} C_{\hat{\varepsilon}}^{-1} M_{m \times 1}.  (12)
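A direct transcription of Eqs. (6)-(12) into NumPy might look as follows (our own sketch; the diagonal error covariance with standard deviation 0.1, matching the choice reported in Section 3, is an assumption for illustration, and all names are hypothetical):

import numpy as np

def mlpce_regression_vector(F, M, k, noise_std=0.1):
    """Maximum likelihood estimate of the regression vector R (length n)
    from the feature matrix F (m x n), targets M (length m), k dominant
    factors, and a diagonal error covariance noise_std**2 * I."""
    U, s, Vt = np.linalg.svd(F, full_matrices=False)
    P = U[:, :k] * s[:k]                    # score matrix, m x k
    J = Vt[:k, :].T                         # factor matrix, n x k
    C_inv = np.eye(len(M)) / noise_std**2   # inverse error covariance
    # R_hat = (P^T C^-1 P)^-1 P^T C^-1 M, as in Eq. (11); with an isotropic
    # covariance this reduces to ordinary least squares, but the general
    # form is kept to mirror the equation.
    R_hat = np.linalg.solve(P.T @ C_inv @ P, P.T @ C_inv @ M)
    return J @ R_hat                        # back to n-space, as in Eq. (12)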

After the training phase, we obtain N regression vectors if there are N registered subjects. For unknown intruders, we use a hypothesis H_0 to represent "none of them". To allow the multiple hypothesis testing to accommodate unknown subjects, we modified the multiple hypothesis model used in our previous paper [11]. In this real-time recognition system, a multiple hypothesis model is built for each registered subject from the means and covariances of that subject's clustered training data, [μ_1^i, μ_2^i, ..., μ_K^i] and [C_1^i, C_2^i, ..., C_K^i], where i denotes the i-th registered subject and K is the number of clusters. The mean can be calculated by

\mu = \bar{W} = \frac{1}{n} \sum_{l=1}^{n} W_{l},  (13)

where W refers to the entire set of data in a cluster, W̄ denotes the mean of the set, a subscript on W indicates a specific element of the set, and n is the number of elements in W. The covariance matrix for a two-dimensional data set is

C = \begin{bmatrix} \mathrm{cov}(p,p) & \mathrm{cov}(p,q) \\ \mathrm{cov}(q,p) & \mathrm{cov}(q,q) \end{bmatrix}.  (14)

Each entry in the matrix is the covariance between two of the dimensions. The covariance is computed as

\mathrm{cov}(X,Y) = \frac{\sum_{l=1}^{n} (X_{l} - \bar{X})(Y_{l} - \bar{Y})}{n - 1}.  (15)
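In practice, the cluster statistics of Eqs. (13)-(15) reduce to two NumPy calls (a small sketch; we assume the 2-D decision-plane scores of one cluster are stored as the rows of an array W):

import numpy as np

def cluster_statistics(W):
    """Mean vector and covariance matrix of one cluster, W of shape (n, 2)."""
    mu = W.mean(axis=0)             # Eq. (13)
    C = np.cov(W, rowvar=False)     # Eqs. (14)-(15), with the 1/(n-1) factor
    return mu, C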

Therefore, we have N + 1 hypotheses, {H_0, H_1, …, H_N}, against which to test an unlabeled measurement vector m. The decision rule is

m \rightarrow \begin{cases} H_{0}, & \text{if } \max_{i} \{ p(m \mid H_{i}) \} < \gamma_{i}, \\ H_{i}: i = \arg\max_{i} \{ p(m \mid H_{i}) \}, & \text{otherwise}, \end{cases}  (16)

where γ_i is the rejection threshold selected for each registered subject, and the probability density is calculated by

p(m \mid H_{i}) = \frac{1}{(2\pi) |C_{i}|^{1/2}} \exp\left[ -\frac{1}{2} (m - \mu_{i})^{T} C_{i}^{-1} (m - \mu_{i}) \right].  (17)
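A sketch of this decision rule (Eqs. (16)-(17)) in NumPy is given below; it assumes 2-D decision-plane scores and interprets the likelihood of a registered subject as the best likelihood over that subject's speed clusters (function and variable names are ours, not from the paper):

import numpy as np

def classify(m, means, covs, thresholds):
    """Assign a 2-D measurement m to a registered subject or reject it.
    means[i], covs[i] : lists of cluster means/covariances for subject i
    thresholds[i]     : rejection threshold gamma_i for subject i
    Returns the index of the most likely registered subject, or -1 ("Others")."""
    def gauss_pdf(x, mu, C):
        d = x - mu
        return np.exp(-0.5 * d @ np.linalg.solve(C, d)) / (
            2.0 * np.pi * np.sqrt(np.linalg.det(C)))   # Eq. (17), 2-D case
    # best cluster likelihood for each registered subject
    scores = [max(gauss_pdf(m, mu, C) for mu, C in zip(means[i], covs[i]))
              for i in range(len(means))]
    best = int(np.argmax(scores))
    return best if scores[best] >= thresholds[best] else -1   # Eq. (16)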

3. Experimental results

Fig. 2. A sensor module (including a PIR detector, a Fresnel lens array, a Texas Instruments micro-controller (MSP430149), and an RF transceiver (TRF6901) module).

Fig. 3. The experimental setup for real-time walker recognition.

This real-time walker recognition system was implemented using a Texas Instruments micro-controller (MSP430149) and an RF transceiver (TRF6901) module. A sensor module is shown in Fig. 2. The sensory data are processed on the embedded micro-controller and then transmitted to the host computer. More details about the computation and communication platform can be found in Ref. [7]. Figure 3 illustrates the experimental setup. A sensor module, which contains a pyroelectric IR sensor and a Fresnel lens array, is mounted on a pillar at a height of 80 cm to detect the IR radiation from the subject. The sensory data were collected while different persons walked back and forth along a prescribed straight path, 3 m away from and perpendicular to the sensor. The vertical field of view of the sensor module spans 62–126 cm from the ground. Within this range, the sensor module can simultaneously detect IR radiation from the torso, arms, and legs of a person of normal height. A more detailed discussion can be found in our previous paper [11].

An important aspect of a human recognition system is the choice of a feature that discriminates among individuals. Figure 4 shows the flow chart of the real-time feature extraction, which consists of three parts: event detection, feature extraction, and feature validation. As soon as an event occurs, its data are retrieved. The length of the event data is checked first to reject trivial events. A fast Fourier transform is then used to generate the feature data. The feature is also checked against a universal background model to ensure its validity before being tested against all the hypotheses. Figure 5 shows the process of event window detection. We obtain the windowed power spectral density (WPSD) of the sensory data using a windowed discrete Fourier transform (WDFT). The signals are then digitized by threshold testing, and an event window is created for each event. In this example, the raw data contain four event data sets. Once a window is formed, the corresponding event data are retrieved immediately.
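As a rough sketch of this event-detection and feature-extraction chain (the window length, hop size, threshold, and minimum event length below are placeholders, not values from the paper), one possible NumPy implementation is:

import numpy as np

def extract_event_features(signal, win=64, hop=32, thresh=1e-3, min_windows=4):
    """Locate event windows from the windowed power spectral density of a
    1-D sensor signal, then return an FFT-magnitude feature per event."""
    # windowed power spectral density (short-time DFT power per frame)
    n_frames = max((len(signal) - win) // hop + 1, 0)
    wpsd = np.array([np.sum(np.abs(np.fft.rfft(signal[i*hop:i*hop + win]))**2)
                     for i in range(n_frames)])
    active = wpsd > thresh                      # digitize by thresholding
    # group consecutive active frames into event windows
    events, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            if i - start >= min_windows:        # reject trivial events
                events.append((start*hop, i*hop + win))
            start = None
    if start is not None and len(active) - start >= min_windows:
        events.append((start*hop, len(signal)))
    # spectral feature for each retained event (in practice the events would
    # be resampled or zero-padded to a common length before regression)
    return [np.abs(np.fft.rfft(signal[b:e])) for b, e in events]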

Fig. 4. Flow chart of real-time feature extraction.

Fig. 5. Event window detection from the windowed power spectral density of sensory data. (a) Raw data. (b) WPSD of the raw data. (c) Digitized signals. (d) Event windows.

Figure 6 shows two event data sets generated by two different individuals walking across the field of view of the sensor. The corresponding spectral features are shown in Fig. 7. It can be seen that the features generated by two people walking at a similar speed are different. Meanwhile, for the same person, different speeds also produce spectral differences, and hence we have to take the effect of speed into account. During training, 120 data sets were collected for each person walking back and forth along a fixed path at 3 different speed levels, namely fast, moderate, and slow, all within daily walking habits. The features of two human subjects are displayed in Fig. 8. Each column displays the feature data collected at a different walking speed. Each subfigure contains 40 superimposed data sets gathered from 20 repeated, independent back-and-forth walks. From the degree of feature overlap, we can see that the repeatability of the features generated by the same person at the same speed is high.

In the training stage, we clustered the 120 data sets from each registered subject into 3 clusters corresponding to the three speed levels. Since the label (subject identity and walking speed) of each data set is known, the clustering process can be viewed as supervised training. Accordingly, we can map these 3 clusters to 3 points equally distributed along a circle by linear regression. The resulting regression vector for each registered subject, obtained from MLPCE, defines the boundary between the data sets. The covariance matrix C_ε̂ used to form the regression vector is diagonal, with a standard deviation of 0.1 for each diagonal element. To determine the number of dominant factors k used in each of the models, leave-one-out cross-validation was used. In this approach, the calibration model is constructed using all but one sample of the calibration data set, and that sample is then predicted with the model. This procedure is repeated for all feature data in the training set. The number of factors is chosen by calculating the root mean squared error of prediction (RMSEP) as a function of k. The RMSEP is computed as

\mathrm{RMSEP} = \sqrt{ \frac{ \sum_{i=1}^{n} (M_{i} - \hat{M}_{i})^{2} }{ n } } = \sqrt{ \frac{ \sum_{i=1}^{n} e_{i}^{2} }{ n } },  (18)

where M_i is the actual value of M for feature data set i, M̂_i is the value predicted for feature data set i with the model under evaluation, e_i is the residual for feature data set i (the difference between the predicted and actual M-values), and n is the number of feature data sets for which M̂ is obtained by prediction. The RMSEP is usually a convex function of k, and we look for the optimal number of dominant factors k at which the minimum occurs.
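A minimal leave-one-out implementation of this criterion, reusing the hypothetical mlpce_regression_vector sketch from Section 2, might look like:

import numpy as np

def rmsep_for_k(F, M, k, noise_std=0.1):
    """Leave-one-out root mean squared error of prediction (Eq. (18))
    for a given number of dominant factors k."""
    m = F.shape[0]
    errors = np.empty(m)
    for i in range(m):
        keep = np.arange(m) != i
        R = mlpce_regression_vector(F[keep], M[keep], k, noise_std)
        errors[i] = F[i] @ R - M[i]    # residual e_i for the held-out sample
    return np.sqrt(np.mean(errors**2))

The optimal number of factors can then be read off as, e.g., min(range(1, 21), key=lambda k: rmsep_for_k(F, M, k)).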

Using different values of k to construct the regression vector yields different prediction results, and we look for the number of factors at which the RMSEP minimum occurs. As shown in Fig. 9, for the y-axis mapping, k = 10 gives the best prediction result for Jason's data. However, the optimal number of dominant factors may vary for different data structures; the best prediction result for Bob's data is obtained when k = 12. After selecting the number of factors k, we can construct an optimal regression vector for each registered subject. Figure 10 shows the clustering results for the data sets in Fig. 8. The contours of the probability density functions (pdfs) associated with these clusters, ranging from 0.1 to 1, are also illustrated in the figure. We further tested the performance of the regression vector as the signal noise increases. The standard deviation of the additive detector noise ranges from 10⁻⁶ to 10⁻⁵. We tested 120 signal sets gathered from Jason walking at the three speed levels. Figure 11 shows the RMSEP results for different levels of additive noise: the RMSEP increases from 0.2243 to 0.2558 as the noise standard deviation increases from 10⁻⁶ to 10⁻⁵.

Fig. 6. Two event data sets generated by two different individuals.

Fig. 7. The spectral features for the two individuals, derived from the event data in Fig. 6.

Fig. 8. Feature data of two subjects. Each column corresponds to a different speed level, and each subfigure contains 40 superimposed data sets.

Fig. 9. Results of leave-one-out cross-validation of the calibration data.

Fig. 10. The supervised clustering results upon 3 labels for 120 data sets, with contours of the probability density distributions. (a) Jason's training data. (b) Bob's training data.

Fig. 11. The RMSEP results for different values of additive noise.

Another important issue in this real-time walker recognition system with multiple hypothesis testing is the selection of a threshold γ. In detection theory, if the output is above the threshold, the test is said to be positive, indicating that the target is present. In target detection, a correct classification is called true, while an incorrect classification is called false. For example, if a registered subject walks across the FOV of the sensor system and the test properly detects the condition, the result is a true positive. On the other hand, if a registered subject does not walk across the FOV of the sensor system but the test erroneously indicates that he/she appears, the result is a false positive. For threshold selection, a larger value of γ rejects unregistered subjects at a higher rate, whereas a smaller value of γ achieves a lower error in detecting the presence of registered subjects. To select an appropriate γ, an ROC curve can be utilized. Each point on the ROC curve corresponds to a different rejection threshold; the tradeoff between obtaining more true positives at the expense of additional false positives is visualized by plotting it for every possible threshold.

To demonstrate this real-time recognition system, four people were registered in the database of the recognition system. Figure 12 shows the ROC plots of the four registered subjects, Jason, Bob, Doris, and Jane. The rejection threshold is largest at the starting point of an ROC curve, where the true-positive and false-positive rates are zero; at the endpoint of the curve the rejection rate is zero, and the true-positive and false-positive rates sum to 100%. As indicated in the figure, an optimal γ lies on the equal error rate (EER) line, where the false-alarm rate equals the miss probability. The miss probability, also called the false rejection rate (FRR = 1 − true-positive rate), is the percentage of authorized individuals rejected by the system [15]. After selecting a γ for each registered subject, data from six unregistered people imitating the speeds and gaits of the four registered subjects were collected and tested. The recognition results are summarized in Table 2. In recognition among the 4 registered subjects, the average recognition rate is 82.5%, while the average recognition rate for the 6 unregistered intruders is 78.3%. A recognition system that can only identify walkers belonging to a predefined set of known walkers performs closed-set identification; adding a "none-of-the-above" option yields open-set identification [20]. Compared with our previous results [12], this system achieves open-set identification. We also computed the overall false-positive and true-positive rates by averaging over all the registered subjects and then chose a single threshold for the system; in this case, the average recognition rate for registered subjects decreases from 82.5% to 78.75%.
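For completeness, a small sketch of how an equal-error-rate threshold could be read off from lists of genuine (registered-subject) and impostor scores follows; the data layout and names are ours, not the paper's:

import numpy as np

def eer_threshold(genuine_scores, impostor_scores):
    """Sweep candidate thresholds and return the one where the false
    rejection rate (miss) and the false acceptance rate (false alarm)
    are closest, i.e. the equal-error-rate point of the ROC curve."""
    g = np.asarray(genuine_scores)
    imp = np.asarray(impostor_scores)
    candidates = np.unique(np.concatenate([g, imp]))
    best_t, best_gap = None, np.inf
    for t in candidates:
        frr = np.mean(g < t)       # registered subjects rejected
        far = np.mean(imp >= t)    # intruders accepted
        if abs(frr - far) < best_gap:
            best_gap, best_t = abs(frr - far), t
    return best_t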

Fig. 12. ROC curves of the four registered people.

Table 2. Recognition results for the 4 registered and 6 unregistered subjects. During the experiment, each subject walked 20 rounds along a fixed path. Detection of an unregistered subject yields a report of "Others".

4. Conclusion

This paper presented a new biometric system for real-time walker recognition using a pyroelectric IR sensor and a Fresnel lens array. The real-time system was implemented using a Texas Instruments micro-controller (MSP430149) and an RF transceiver (TRF6901) module, and it offers low cost, low power consumption, and illumination independence. The procedure for real-time feature extraction and the improved multiple hypothesis testing algorithm were also described. In the training stage, MLPCE is used to create a regression vector for each registered subject, and ROC curves are studied to select suitable thresholds that maximize acceptance rates and minimize rejection errors. The experimental results demonstrate the open-set recognition capability of the system for a small group of 10 subjects (4 registered subjects and 6 unregistered intruders).

In our previous paper [12], PIR detector arrays were used to generate digital sequential data representing human motion features. The advantages of that HMM-based system are its less rigid training process, decreased sensitivity to walking speed, and effectiveness in the path-independent identification mode. In this study, the analog feature of a PIR sensor's temporal signal is used to represent the human motion features. It contains more detailed information about the thermal source and is hence suitable for higher-security applications in human biometric verification and open-set identification.

This human recognition system is based on the IR radiation from the human body. Among the factors that affect human heat radiation, the clothing that walkers wear is the most important. The experimental results show that the system's recognition capability is insensitive to clothes of similar style and fabric. However, a person wearing clothes of a different fabric (e.g., a cotton garment for training and a polyester one for testing) will degrade the recognition rate. To alleviate this limitation, one may need multiple sensor nodes to gather information about the subject from multiple perspectives, or a multi-modal sensing technique that combines conventional video devices with the pyroelectric sensors. This, however, increases the complexity of the system, so there is a compromise between recognition rate and system complexity. In addition, different weather conditions (wind, rain, snow, etc.) may influence the recognition rate; finding an effective way to offset those factors will be the aim of our future work. Our future work also includes simultaneous recognition of multiple people and performance improvement for a larger group of subjects by using multiple sensor nodes.

Acknowledgment

The authors would like to express their gratitude to all the volunteers taking part in the experiments for their enthusiasm and patience. The authors also acknowledge the support of the Army Research Office through the grant DAAD 19-03-1-03552, and partial support from the National Science Council under the grant NSC 95-2221-E-009-294.

References and links

1. A. K. Jain, A. Ross, and S. Prabhakar, "An introduction to biometric recognition," IEEE Trans. Circuits Syst. Video Technol. 14, 4–20 (2004).

2. V. Spitzer, M. Ackerman, A. Scherzinger, and D. Whitlock, "The visible human male: a technical report," J. Am. Med. Inform. Assoc. 3, 118–130 (1996).

3. N. Kakuta, S. Yokoyama, and M. Nakamura, "Estimation of radiative heat transfer using a geometric human model," IEEE Trans. Biomed. Eng. 48, 324–331 (2001).

4. M. Planck, "On the law of distribution of energy in the normal spectrum," Ann. Phys. 4, 553 (1901).

5. U. Gopinathan, D. J. Brady, and N. P. Pitsianis, "Coded apertures for efficient pyroelectric motion tracking," Opt. Express 11, 2142–2152 (2003).

6. A. S. Sekmen, M. Wilkes, and K. Kawamura, "An application of passive human-robot interaction: human tracking based on attention distraction," IEEE Trans. Syst. Man Cybern. A 32, 248–259 (2002).

7. Q. Hao, D. J. Brady, B. D. Guenther, J. Burchett, M. Shankar, and S. Feller, "Human tracking with wireless distributed radial pyroelectric sensors," IEEE Sens. J. 6, 1683–1694 (2006).

8. T. Hussian, A. Baig, T. Saadawi, and A. Ahmed, "Infrared pyroelectric sensor for detection of vehicular traffic using digital signal processing techniques," IEEE Trans. Veh. Technol. 44, 683–689 (1995).

9. Glolab Corporation, "Infrared parts manual," http://www.glolab.com/pirparts/infrared.html.

10. Fresnel Technologies Inc., http://www.fresneltech.com/arrays.html.

11. J. S. Fang, Q. Hao, D. J. Brady, M. Shankar, B. D. Guenther, N. P. Pitsianis, and K. Y. Hsu, "Path-dependent human identification using a pyroelectric infrared sensor and Fresnel lens arrays," Opt. Express 14, 609–624 (2006).

12. J. S. Fang, Q. Hao, D. J. Brady, B. D. Guenther, and K. Y. Hsu, "Real-time human identification using a pyroelectric infrared detector array and hidden Markov models," Opt. Express 14, 6643–6658 (2006).

13. S. K. Schreyer, M. Bidinosti, and P. D. Wentzell, "Application of maximum likelihood principal components regression to fluorescence emission spectra," Appl. Spectrosc. 56, 789–796 (2002).

14. M. N. Leger and P. D. Wentzell, "Maximum likelihood principal components regression on wavelet-compressed data," Appl. Spectrosc. 58, 855–862 (2004).

15. D. Green and J. Swets, Signal Detection Theory and Psychophysics (John Wiley and Sons, New York, 1989).

16. J. A. Hanley and B. J. McNeil, "The meaning and use of the area under a receiver operating characteristic (ROC) curve," Radiology 143, 29–36 (1982).

17. C. E. Metz, "Basic principles of ROC analysis," Semin. Nucl. Med. 8, 283–298 (1978).

18. A. P. Bradley, "The use of the area under the ROC curve in the evaluation of machine learning algorithms," Pattern Recogn. 30, 1145–1159 (1997).

19. D. J. Hand and R. J. Till, "A simple generalisation of the area under the ROC curve for multiple class classification problems," Mach. Learn. 45, 171–186 (2001).

20. M. Faundez-Zanuy and E. Monte-Moreno, "State-of-the-art in speaker recognition," IEEE Aerosp. Electron. Syst. Mag. 20, 7–12 (2005).
