Optica Publishing Group

Efficient subpixel registration for polarization-modulated 3D imaging

Open Access

Abstract

To implement real-time 3D reconstruction and display for a polarization-modulated 3D imaging lidar system, an efficient subpixel registration method based on maximum principal component analysis (MPCA) is proposed in this paper. With this method, only the maximum principal component is estimated to identify non-integer translations in the spatial domain, while the other principal components, which are affected by noise, are ignored. Consequently, the subpixel registration is robust and stable in the presence of noise, while computational complexity and memory consumption are reduced simultaneously, especially when the image size is large. Both simulated and real polarization-modulated images are used to verify the proposed method. Simulation results show that a registration accuracy of 0.01 pixels is achieved; meanwhile, experimental results show that the proposed method can effectively reconstruct the depth image in a real-world application.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Polarization-modulated 3D imaging lidar provides high resolution (1024 × 1024 pixels or even higher) while simplifying data acquisition for long-range applications [1–3]. In such a system, dual CCD cameras accumulate the returned light whose polarization state is modulated by an electro-optic modulator (EOM). With this dual-camera structure, a depth image is reconstructed from a single frame of polarization-modulated images (a cos²-modulated image and a sin²-modulated image). This advantage allows the polarization-modulated 3D imaging system to act as a flash 3D lidar, which enhances the performance of dynamic 3D imaging, regardless of object or platform motion.

To reconstruct a depth image from the polarization-modulated images, image registration is required to align the pixels of one image with the corresponding pixels of the other. In general, image registration methods fall into two major categories: feature-based methods and area-based methods [4, 5]. Specifically, feature-based methods perform pixel-level registration while area-based methods perform subpixel registration. Although pixel-level registration may be adequate in many applications, accurate 3D reconstruction in 3D imaging lidar imposes a requirement for subpixel registration [6]. Furthermore, the gray values of the two polarization-modulated images to be registered are complementary (e.g., an object appears bright in the cos²-modulated image but dark in the sin²-modulated image) [2, 3]. In this case, salient features may be inadequate for feature-based registration in the polarization-modulated 3D imaging lidar system. As a result, area-based methods play an important role in image registration for such lidar systems.

Cross correlation and phase correlation, which operate on the images without attempting to detect salient features, are the most common area-based methods for subpixel registration [4–8]. In more detail, the cross correlation method matches image intensity directly without any structural analysis, and is therefore sensitive to intensity changes caused by noise, varying illumination, or different sensors [4, 9]. To achieve subpixel accuracy with this method, interpolation of the measured values is needed to identify non-integer translations in the spatial domain; however, interpolation suffers from high computational complexity. By locating the cross correlation peak within a small fraction of a pixel, subpixel registration can also be achieved with the same accuracy as traditional FFT up-sampling or better, but with greatly reduced computational complexity and memory requirements [10]. However, the cross correlation peak is also corrupted by noise: non-integer translations between two similar images cause the peak to spread across neighboring pixels, degrading the quality of the translation estimate [11]. Alternatively, the phase correlation method works directly in the Fourier domain, identifying translations from the phase correlation matrix (PCM) based on the fact that the phase shift is a linear function of the translation parameters [12]. Therefore, the translations are easily solved by performing a least-squares fit (LSF) to the linear phase in the Fourier domain. It is well known that the drawbacks of existing subpixel registration methods lie in their sensitivity to noise and aliasing. As mentioned in [13, 14], noise and aliasing are mainly localized at high frequency in the Fourier domain. To eliminate them, high-frequency components lying outside a radius of 0.6N/2 (where N is the minimum number of samples in the height and width dimensions) are removed from the phase correlation matrix, which not only yields superior registration accuracy in the presence of noise and aliasing, but also reduces the computational complexity efficiently [13]. However, it should be noted that image noise exists in both the high-frequency and low-frequency components, and therefore removing only the high-frequency components from the phase correlation matrix may not be sufficient. Based on the fact that the noise-free model of the phase correlation matrix is a rank-one matrix, singular value decomposition (SVD) has been used to separate the phase correlation matrix into height and width displacement estimates [11, 15], which reduces phase unwrapping of the data to one-dimensional subspaces and therefore makes this method robust and stable in the presence of noise. In practice, only the eigenvector with the maximum eigenvalue is used to produce the linear phase, while the other eigenvectors are ultimately discarded. In this case, solving for all eigenvectors and eigenvalues with the SVD method is redundant and unnecessary, and needlessly increases the computational burden.

To implement real-time 3D reconstruction for the polarization-modulated 3D imaging lidar, an efficient subpixel registration method using maximum principal component analysis (MPCA) is proposed in this paper. With this method, only the maximum principal component is estimated while the other principal components are ignored. As a result, computational complexity is reduced and memory is saved significantly, especially when the image size is large. Although the other principal components are ignored, the accuracy of the subpixel registration is not degraded at all, enabling accurate online 3D reconstruction.

2. Principle of operation

2.1 Polarization-modulated 3D imaging

The polarization-modulated 3D imaging lidar system is shown in Fig. 1. A linear polarizer (P1, parallel to the emitted linearly polarized light) prepares linearly polarized light for the electro-optic modulators (EOMs). The first electro-optic modulator (EOM1) manipulates the polarization state of the returned light to perform time-resolved imaging. In addition, the second electro-optic modulator (EOM2), together with a linear polarizer (P2, perpendicular to P1), is used in channel X. They are placed between the polarization beam splitter (PBS) and the EMCCDx camera to act as a fast shutter, which determines when and for how long the p-polarized light passes through channel X. In contrast, no additional devices are needed in channel Y, because EOM1 and the PBS alone determine when and for how long the s-polarized light passes through channel Y. Finally, range-gated 3D imaging is implemented, with the gate opening range spanning from RBase to (RBase + L).


Fig. 1 Schematic diagram of polarization-modulated 3D imaging. Here, the electro-optic modulators manipulate polarization state of returned-light to perform range-gated imaging; meanwhile, dual EMCCD cameras acquire polarization-modulated images for 3D reconstruction in channel X and Y, respectively.


To implement range-gated 3D imaging, a ramp voltage V1(t) is applied to the first electro-optic modulator (EOM1) and a square-wave voltage V2(t) is applied to the second electro-optic modulator (EOM2). When the ramp voltage V1(t) is applied to EOM1 along the direction of propagation, a phase retardation θ between the ordinary wave and the extraordinary wave is introduced, which is proportional to V1(t) [16]:

$$\theta = \frac{\pi}{V_\pi} V_1(t), \qquad \left(0 \le V_1(t) \le V_\pi\right) \tag{1}$$
where Vπ is the half-wave voltage of the crystal. Since objects located at different ranges produce different round-trip times, the phase retardation can be rewritten as a function of the range R:
$$\theta = \frac{\pi}{L}\left(R - R_{\text{Base}}\right), \qquad \left(R_{\text{Base}} \le R \le R_{\text{Base}} + L\right) \tag{2}$$
where R is the range between the object and the lidar system, RBase is the base range, and L is the gate opening range (see Fig. 1). After polarization beam splitting, the p-polarized and s-polarized components are separated into channels X and Y, with intensities modulated by the cos²(θ/2) and sin²(θ/2) functions, respectively [17]:
$$\begin{cases} I_X = I_{\text{REC}} \cos^2\left(\dfrac{\pi}{2}\cdot\dfrac{R - R_{\text{Base}}}{L}\right) \\[2ex] I_Y = I_{\text{REC}} \sin^2\left(\dfrac{\pi}{2}\cdot\dfrac{R - R_{\text{Base}}}{L}\right) \end{cases} \qquad \left(R_{\text{Base}} \le R \le R_{\text{Base}} + L\right) \tag{3}$$
where IREC is the intensity of the returned light, and IX and IY are the intensities of the p-polarized and s-polarized components, respectively. According to Eq. (3), the polarization-modulated images acquired by the EMCCDx and EMCCDy cameras contain range information, and therefore the range R between the object and the lidar system is given by

$$R = R_{\text{Base}} + \frac{2L}{\pi}\arctan\left(\sqrt{\frac{I_Y}{I_X}}\right) \tag{4}$$

In addition, the sum of IX and IY yields a polarization-demodulated image (a conventional intensity image):

$$I_{\text{REC}} = I_X + I_Y \tag{5}$$
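As a concrete illustration of Eqs. (3)–(5), the per-pixel depth and intensity reconstruction can be sketched in NumPy. The gate parameters `R_BASE` and `L_GATE` below are illustrative values, not the parameters of the paper's system, and the function names are our own:

```python
import numpy as np

# Illustrative gate parameters (not the paper's actual system values).
R_BASE = 950.0   # base range R_Base, in metres
L_GATE = 30.0    # gate opening range L, in metres

def reconstruct_depth(i_x, i_y, r_base=R_BASE, l_gate=L_GATE, eps=1e-12):
    """Per-pixel range R from registered cos^2- and sin^2-modulated images.

    Implements Eq. (4): R = R_Base + (2L/pi) * arctan(sqrt(I_Y / I_X)).
    """
    i_x = np.asarray(i_x, dtype=float)
    i_y = np.asarray(i_y, dtype=float)
    return r_base + (2.0 * l_gate / np.pi) * np.arctan(np.sqrt(i_y / (i_x + eps)))

def demodulate_intensity(i_x, i_y):
    """Eq. (5): conventional intensity image I_REC = I_X + I_Y."""
    return np.asarray(i_x, dtype=float) + np.asarray(i_y, dtype=float)
```

A quick sanity check of the round trip: inverting Eq. (3) for a pixel at R = 965 m (so θ/2 = π/4 and I_X = I_Y = I_REC/2) recovers the original range.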

It should be noted that Eq. (4) and Eq. (5) hold only if the pixels of one polarization-modulated image are aligned to the corresponding pixels of the other. Therefore, image registration is required to reconstruct a depth image and an intensity image from the two polarization-modulated images. According to Eq. (3), a near-range object appears bright and a far-range object appears dark in EMCCDx; the opposite holds in EMCCDy, where a near-range object appears dark and a far-range object appears bright. That is to say, the gray values of the distorted image (acquired by EMCCDx) and the reference image (acquired by EMCCDy) are complementary, which makes feature-based registration difficult in the polarization-modulated 3D imaging lidar system. As a result, area-based registration is adopted in our system; more importantly, it can register the polarization-modulated images to better than one pixel despite misalignment of the EMCCD camera pair, which helps to ensure accurate 3D reconstruction.

2.2 Maximum principal component analysis for subpixel registration

To implement real-time 3D reconstruction, an efficient subpixel registration method named maximum principal component analysis (MPCA) is proposed in this paper. Compared with the SVD method, the proposed method estimates only the maximum principal component to identify non-integer translations during subpixel registration, while the other principal components are ignored. As a result, computational complexity is reduced and memory is saved simultaneously, especially when the image size is large. Let f(h,w) and g(h,w) denote two image functions related by translations h0 and w0 in the height and width dimensions, and let F(u,v) and G(u,v) be their fast Fourier transforms (FFTs). The phase correlation matrix (PCM) Q(u,v) is then given by [4, 6]

$$Q(u,v) = \frac{F(u,v)\,G(u,v)^*}{\left|F(u,v)\,G(u,v)^*\right|} \tag{6}$$
where u = 0, 1, …, H − 1 and v = 0, 1, …, W − 1; G(u,v)* represents the complex conjugate of G(u,v).
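Eq. (6) can be sketched in a few lines of NumPy (a minimal sketch; the function name and the small regularizer `eps` are ours):

```python
import numpy as np

def phase_correlation_matrix(f_img, g_img, eps=1e-12):
    """Eq. (6): normalized cross-power spectrum Q(u, v) of two images."""
    F = np.fft.fft2(f_img)
    G = np.fft.fft2(g_img)
    cross = F * np.conj(G)
    # Normalizing by the magnitude whitens the spectrum, so only the
    # phase difference (i.e., the translation) is retained.
    return cross / (np.abs(cross) + eps)
```

For an integer translation, the inverse FFT of Q is ideally a delta function at the shift, which is the classical pixel-level phase correlation result.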

Although the phase correlation matrix (PCM) can be used to perform phase unwrapping directly (the PCM method), noise and aliasing may cause phase unwrapping errors too large to tolerate. Hence, the frequency components separation technique is applied to remove the noise and aliasing at high frequency (low-pass filtering). Then, the MPCA method separates the low-frequency components into one-dimensional subspaces (the maximum principal component and its corresponding direction), which further suppresses the noise at low frequency. Let XD×N be the low-frequency component of the phase correlation matrix (here, XD×N is also referred to as the phase observed subspace, and D and N are its dimensions). In addition, let Q′(u′,v′) be derived by shifting the zero-frequency component of Q(u,v) to the center of the spectrum (u′ = 0, 1, …, H − 1 and v′ = 0, 1, …, W − 1). Then, the relationship between XD×N and Q′(u′,v′) is given by

$$X_{D\times N} = Q'(u',v')\Big|_{\frac{H-D}{2}\,\le\, u' \,\le\, \frac{H+D}{2}-1,\;\; \frac{W-N}{2}\,\le\, v' \,\le\, \frac{W+N}{2}-1} \tag{7}$$
where D = ⌊0.6H/2⌋, N = ⌊0.6W/2⌋, and ⌊·⌋ is the downward rounding (floor) operation. Meanwhile, XD×N is governed by a linear transformation PD×K of the K-dimensional principal components subspace ZK×N plus additive Gaussian noise ε:
$$X_{D\times N} = P_{D\times K} Z_{K\times N} + \mu_X I_{D\times N} + \varepsilon \tag{8}$$
where ID×N is a D × N matrix of ones; μX is the mean of all elements in the phase observed subspace XD×N and permits XD×N to have a non-zero mean; and ε is a Gaussian noise variable with covariance σ²ID×N, i.e., ε ~ N(0, σ²ID×N), which is independent of and uncorrelated with ZK×N. Here, the Expectation-Maximization (EM) algorithm is applied to estimate the first K principal components and their corresponding directions for the phase observed subspace XD×N [18, 19]:
$$\begin{cases} Z_{K\times N} = R_{K\times K}^{-1} P_{D\times K}^{T}\left(X_{D\times N} - \mu_X I_{D\times N}\right) \\[1ex] P_{D\times K} = \left(X_{D\times N} - \mu_X I_{D\times N}\right) Z_{K\times N}^{T}\left(Z_{K\times N} Z_{K\times N}^{T}\right)^{-1} \\[1ex] X_{D\times N} = P_{D\times K} Z_{K\times N} + \mu_X I_{D\times N} \end{cases} \tag{9}$$
where RK×K is defined by

$$R_{K\times K} = P_{D\times K}^{T} P_{D\times K} + \sigma^2 I_{K\times K} \tag{10}$$

It follows from Eq. (9) that the phase observed subspace XD×N is decomposed into the first K (K < D) principal components and their corresponding directions. In other words, there exists a matrix PD×K which acts as a linear transformation to compress the high-dimensional matrix XD×N into a compact one (ZK×N). This transformation minimizes noise and redundancy while maximizing signal. In particular, when K is set to 1 (K = 1), XD×N is projected onto two vectors: the maximum principal component Z1×N and its corresponding direction PD×1. This can be done with a computational complexity of only O(DN), whereas computing the full singular value decomposition of a D × N matrix costs O(DN²). Since the matrix of size D × N is then represented by a D-dimensional vector and an N-dimensional vector, the memory size is only (D + N) in the MPCA method, whereas that of the SVD method exceeds (D² + N²). Hence, the MPCA method is considerably more efficient than the SVD method with respect to both computational complexity and memory size.
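The K = 1 iteration of Eq. (9) can be sketched as follows. This is a zero-noise-limit variant of the EM updates (i.e., alternating least squares, shown here on a real-valued matrix for clarity); the function name and iteration count are our own choices:

```python
import numpy as np

def max_principal_component(X, n_iter=50, seed=0):
    """EM estimate of only the maximum principal component of a D x N matrix.

    Zero-noise-limit EM for PCA with K = 1: each iteration costs O(DN),
    versus O(DN^2) for a full SVD. Returns the direction p (length D) and
    the component z (length N) such that X - mu ~ outer(p, z).
    """
    X = np.asarray(X, dtype=float)
    mu = X.mean()                       # scalar mean mu_X, as in Eq. (8)
    Xc = X - mu
    rng = np.random.default_rng(seed)
    p = rng.standard_normal(X.shape[0])
    for _ in range(n_iter):
        z = p @ Xc / (p @ p)            # E-step of Eq. (9) for K = 1
        p = Xc @ z / (z @ z)            # M-step of Eq. (9) for K = 1
    norm = np.linalg.norm(p)            # fix the scale: unit-length direction
    return p / norm, z * norm
```

For exactly rank-one data the iteration converges in a couple of steps; each step touches every matrix entry once, which is where the O(DN) cost comes from.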

Finally, the translations h0 and w0 in the height and width dimensions are solved by performing phase unwrapping on the maximum principal component Z1×N and its corresponding direction PD×1, respectively. Therefore, subpixel registration is implemented with the proposed MPCA method.
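The final step, phase unwrapping plus a least-squares line fit, might look like this. This is a sketch for an unshifted spectrum whose phase slope is positive in the shift; the sign and index offset differ for the centered subspace of Eq. (7), and the function name is ours:

```python
import numpy as np

def translation_from_component(vec, size):
    """Estimate a (possibly non-integer) shift from one 1-D subspace.

    The unwrapped phase of the maximum principal component (or of its
    direction) is a straight line whose slope encodes the translation:
    shift = slope * size / (2 * pi).
    """
    phase = np.unwrap(np.angle(vec))       # remove 2*pi phase jumps
    idx = np.arange(phase.size)
    slope = np.polyfit(idx, phase, 1)[0]   # least-squares linear fit (LSF)
    return slope * size / (2.0 * np.pi)
```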

3. Simulation work

A simulation system based on polarization-modulated 3D imaging is established to evaluate the performance of the proposed method, in which two polarization-modulated images are generated by simulation software. Arbitrary translations between the two polarization-modulated images are pre-produced by performing spline interpolation in the image domain to simulate installation misalignment of the dual EMCCD cameras, producing a distorted image and a reference image. In this case, the true translations between the distorted image and the reference image are known in advance and are denoted htrue and wtrue in the height and width dimensions, respectively. The estimated translations, by contrast, are obtained by performing image registration and are denoted hreal and wreal, respectively. Consequently, the registration errors are defined as the differences between the estimated and true translations, Δh = hreal − htrue and Δw = wreal − wtrue.

Applying the fast Fourier transform (FFT) to the distorted and reference images yields the phase correlation matrix from Eq. (6). To remove noise and aliasing at high frequency, we adopt the frequency components separation technique presented in [11], so that only the phase observed subspace is utilized in the linear phase determination. To identify the non-integer translations between the distorted image and the reference image, several steps are implemented: First, mask out the phase observed subspace in the phase correlation matrix. Second, perform the MPCA method on the phase observed subspace to reduce the phase correlation matrix to one-dimensional subspaces (a maximum principal component Z1×N and its corresponding direction PD×1). Third, perform phase unwrapping on the one-dimensional subspaces. Finally, fit the linear phase and estimate the non-integer translations from the maximum principal component Z1×N and its corresponding direction PD×1.
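Putting the steps above together, a self-contained sketch of the whole registration pipeline (all names, the alternating-least-squares rank-one step, and the synthetic test protocol are ours, not the paper's code; the mean term of Eq. (8) is skipped for the complex phase matrix):

```python
import numpy as np

def register_subpixel(distorted, reference, frac=0.6, n_iter=30):
    """MPCA subpixel registration sketch, following the four steps above.

    Returns the estimated (height, width) translation of `distorted`
    relative to `reference`.
    """
    H, W = reference.shape
    # Phase correlation matrix, Eq. (6).
    cross = np.fft.fft2(distorted) * np.conj(np.fft.fft2(reference))
    Q = cross / (np.abs(cross) + 1e-12)

    # Step 1: mask out the low-frequency phase observed subspace, Eq. (7),
    # with D = floor(frac*H/2) and N = floor(frac*W/2).
    D, N = int(frac * H / 2), int(frac * W / 2)
    Qc = np.fft.fftshift(Q)
    r0, c0 = (H - D) // 2, (W - N) // 2
    X = Qc[r0:r0 + D, c0:c0 + N]

    # Step 2: maximum principal component by alternating least squares
    # (complex-valued K = 1 iteration; X is ideally rank one).
    p = np.ones(D, dtype=complex)
    for _ in range(n_iter):
        z = p.conj() @ X / (p.conj() @ p)
        p = X @ z.conj() / (z.conj() @ z)

    # Steps 3-4: unwrap the phase of each 1-D subspace and fit a line;
    # the slope per frequency bin gives the translation.
    def fit_shift(vec, size):
        phase = np.unwrap(np.angle(vec))
        slope = np.polyfit(np.arange(vec.size), phase, 1)[0]
        return -slope * size / (2.0 * np.pi)

    return fit_shift(p, H), fit_shift(z, W)
```

On a smooth synthetic image shifted by a known fractional amount in the Fourier domain, this sketch recovers the translation to well within the 0.01-pixel accuracy quoted below.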

The distorted image (cos²-modulated image) and the reference image (sin²-modulated image) are shown in Figs. 2(a) and 2(b), respectively. They are a pair of characteristic images in which objects appearing brighter in one image are darker in the other, representing the extreme situation of the polarization-modulated 3D imaging system. The polarization-demodulated images derived from the summation of the cos²- and sin²-modulated images are shown in Figs. 2(c) and 2(d), where feature-based registration and area-based registration are applied, respectively. Clearly, the representations of ObjA and ObjB in Fig. 2(c) show obvious distortion, while those in Fig. 2(d) show no distortion at all. In fact, salient features in the polarization-modulated images are inadequate for feature-based registration, which makes it unsuitable for the polarization-modulated 3D imaging lidar system.


Fig. 2 The polarization-modulated images and demodulated images in the simulation system. (a) distorted image (cos2-modulated image); (b) reference image (sin2-modulated image); (c) polarization-demodulated image with feature-based registration; (d) polarization-demodulated image with area-based registration.


Both the distorted image and the reference image are unavoidably affected by various noise sources, such as dark current noise, readout noise and background noise [20, 21], which may degrade the accuracy of subpixel registration and subsequently affect the quality of 3D reconstruction. To compare the robustness and stability of the PCM, SVD and MPCA registration methods, white Gaussian noise (WGN) of different levels is added to the distorted image and the reference image. When WGN with 𝒩(0, 0.07²) is added to the images, the registration errors of all three methods are much less than 1 pixel. However, when WGN with 𝒩(0, 0.15²) is added, the registration error of the PCM method exceeds 1 pixel while those of the SVD and MPCA methods remain much smaller than 1 pixel. Consequently, the registration accuracies of the SVD and MPCA methods outperform that of the PCM method. The results of image registration are shown in Table 1.
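The noise protocol above can be reproduced by adding zero-mean WGN of a chosen σ to each normalized image (a sketch; the paper does not give its noise-generation code, so the function name and seeding are our own):

```python
import numpy as np

def add_wgn(image, sigma, seed=None):
    """Add white Gaussian noise N(0, sigma^2) to a [0, 1]-normalized image."""
    rng = np.random.default_rng(seed)
    return image + rng.normal(0.0, sigma, size=np.shape(image))
```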


Table 1. Performance evaluation among the PCM, SVD and MPCA registration methods

Results of 3D reconstruction at the high noise level are shown in Fig. 3; the depth images (RPCM, RSVD and RMPCA) are reconstructed by performing the PCM, SVD and MPCA registration methods, respectively. To compare the depth images RPCM, RSVD and RMPCA more clearly, the differences between each reconstructed depth image and the true depth image (RTRUE) are computed, and the results are shown in Figs. 3(d)-3(f). It is found from Fig. 3 that the range accuracy decreases significantly on the left sides of ObjA and ObjB in the depth image RPCM, while the depth images RSVD and RMPCA remain accurate. The main reason is that the quality of 3D reconstruction degrades as the registration accuracy decreases.


Fig. 3 Results of 3D reconstruction with high noise level. (a), (b) and (c) are the depth images represented by RPCM, RSVD and RMPCA, which are reconstructed by performing the PCM, SVD and MPCA registration methods, respectively. Besides, (d), (e) and (f) are the range error images derived by performing (RPCM-RTRUE), (RSVD-RTRUE) and (RMPCA-RTRUE) operations.


The mean value and standard deviation of the range error images (RPCM-RTRUE, RSVD-RTRUE and RMPCA-RTRUE) are used to evaluate the quality of 3D reconstruction. In detail, the mean values are 0.32 m, 0.24 m and 0.24 m, while the standard deviations are 2.07 m, 0.85 m and 0.85 m, respectively. It is evident that the range accuracy of the MPCA or SVD method outperforms that of the PCM method.

4. Experimental results and discussion

As shown in Fig. 4, an experimental system for polarization-modulated 3D imaging is established in this paper. It works in “flash” mode, so a depth image is reconstructed within a single pulse cycle. During the gate opening time, dual EMCCDs with a resolution of 1024 × 1024 pixels accumulate the returned light in channels X and Y, respectively. Here, the electro-optic modulators play two important roles: one as a fast shutter for the EMCCD camera, the other as a polarization-modulating device for time-resolved imaging.


Fig. 4 The experimental system for polarization-modulated 3D imaging.


Based on the dual-EMCCD structure, 3D reconstruction is accomplished instantaneously, which enhances the performance of dynamic imaging, regardless of object or platform motion. To obtain a sufficient signal-to-noise ratio (SNR), a high-energy laser is used in our flash 3D imaging system. Meanwhile, deeply cooling the EMCCD chip and setting an appropriate electron-multiplying gain enhance the sensitivity of the EMCCD camera for low-light-level applications. The main parameters of the 3D imaging lidar system are listed in Table 2.


Table 2. System parameters for polarization-modulated 3D imaging.

The cos²-modulated image and the sin²-modulated image acquired by the EMCCD cameras are shown in Figs. 5(a) and 5(b), respectively. Similar to Section 3, the polarization-demodulated images are shown in Figs. 5(c) and 5(d), where feature-based registration and area-based registration are applied, respectively. Notably, the feature-based registration fails because it is sensitive to intensity changes, whereas the proposed method adapts well to this situation.


Fig. 5 The polarization-modulated images and demodulated images in the experimental system. (a) cos2-modulated image; (b) sin2-modulated image; (c) polarization-demodulated image with feature-based registration and (d) polarization-demodulated image with area-based registration.


Experimental results are shown in Fig. 6. There are two towers in the scene, one located behind the other. As a result, the front tower is seen clearly while the back tower is hidden by the front one, leaving only portions of its crisscross guardrails visible. The gray image is obtained from a conventional CCD camera during daytime. With the proposed MPCA method, the two polarization-modulated images derived from EMCCDx and EMCCDy are registered correctly, even though their gray values are complementary due to the polarization modulation. Afterwards, a depth image is reconstructed from the polarization-modulated images. The colorbar represents the gate opening range, and the color corresponds to the range between the lidar system and the objects. From the depth image, the crisscross guardrails at different positions can be distinguished clearly, ranging approximately from 960 m to 975 m.


Fig. 6 The towers’ 3D structure derived from the polarization-modulated 3D imaging system. (a) The gray image obtained from conventional CCD camera during day-time; (b) the depth image, corresponding to the designated area in gray image, is reconstructed from the two polarization-modulated images.


5. Conclusion

The efficient subpixel registration method based on maximum principal component analysis (MPCA) estimates only the maximum principal component to identify non-integer translations in the spatial domain, rather than utilizing all principal components. This approach improves the robustness and stability of subpixel image registration in the presence of noise. Furthermore, it reduces computational complexity and memory consumption simultaneously, especially when the image size is large. These advantages make it suitable for online applications and enable real-time reconstruction and display for the polarization-modulated 3D imaging lidar system. A simulation system is established to evaluate the accuracy of subpixel registration and the quality of 3D reconstruction, and an experimental system is established to verify the proposed method in a real-world application. Experimental results show that the proposed method can effectively reconstruct the depth image in the polarization-modulated 3D imaging lidar system.

References

1. P. F. McManamon, “Review of ladar: a historic, yet emerging, sensor technology with rich phenomenology,” Opt. Eng. 51(6), 060901 (2012). [CrossRef]  

2. Z. Chen, B. Liu, E. Liu, and Z. Peng, “Electro-optic modulation methods in range-gated active imaging,” Appl. Opt. 55(3), A184–A190 (2016). [CrossRef]   [PubMed]  

3. Z. Chen, B. Liu, E. Liu, and Z. Peng, “Adaptive polarization-modulated method for high resolution 3D imaging,” IEEE Photonics Technol. Lett. 28(3), 295–298 (2016). [CrossRef]  

4. B. Zitova and J. Flusser, “Image registration methods: a survey,” Image Vis. Comput. 21(11), 977–1000 (2003). [CrossRef]  

5. J. Ma, H. Zhou, J. Zhao, Y. Gao, J. Jiang, and J. Tian, “Robust Feature Matching for Remote Sensing Image Registration via Locally Linear Transforming,” IEEE Trans. Geosci. Remote Sens. 53(12), 6469–6481 (2015). [CrossRef]  

6. H. Foroosh, J. B. Zerubia, and M. Berthod, “Extension of Phase Correlation to Subpixel Registration,” IEEE Trans. Image Process. 11(3), 188–200 (2002). [CrossRef]   [PubMed]  

7. P. Thévenaz, U. E. Ruttimann, and M. Unser, “A pyramid approach to subpixel registration based on intensity,” IEEE Trans. Image Process. 7(1), 27–41 (1998). [CrossRef]   [PubMed]  

8. W. Tong, “Subpixel image registration with reduced bias,” Opt. Lett. 36(5), 763–765 (2011). [CrossRef]   [PubMed]  

9. R. C. Gonzalez, R. E. Woods, and S. L. Eddins, Digital Image Processing Using MATLAB, 2nd ed. (McGraw-Hill, 2011).

10. M. Guizar-Sicairos, S. T. Thurman, and J. R. Fienup, “Efficient subpixel image registration algorithms,” Opt. Lett. 33(2), 156–158 (2008). [CrossRef]   [PubMed]  

11. W. S. Hoge, “A subspace identification extension to the phase correlation method,” IEEE Trans. Med. Imaging 22(2), 277–280 (2003). [CrossRef]   [PubMed]  

12. H. Foroosh and M. Balci, “Sub-pixel registration and estimation of local shifts directly in the Fourier domain,” in International Conf. Image Proces. (ICIP) (2004). [CrossRef]  

13. H. S. Stone, M. T. Orchard, E. Chang, and S. A. Martucci, “A fast direct Fourier-based algorithm for subpixel registration of images,” IEEE Trans. Geosci. Remote Sens. 39(10), 2235–2243 (2001). [CrossRef]  

14. P. Vandewalle, S. Susstrunk, and M. Vetterli, “A frequency domain approach to registration of aliased images with application to super-resolution,” EURASIP J. Appl. Signal Process. 71459, 1–14 (2006).

15. X. Tong, Z. Ye, Y. Xu, S. Liu, L. Li, H. Xie, and T. Li, “A novel subpixel phase correlation method using singular value Decomposition and Unified Random Sample Consensus,” IEEE Trans. Geosci. Remote Sens. 53(8), 4143–4156 (2015). [CrossRef]  

16. A. Yariv and P. Yeh, “Electro-optic Devices,” in Optical waves in crystals: Propagation and control of laser radiation, 1st ed. (Wiley, 2002).

17. M. Born and E. Wolf, “Optics of crystals,” in Principles of Optics, 7th (expanded) ed. (Cambridge U., 1999).

18. C. M. Bishop, Pattern Recognition and Machine Learning (Springer, 2006).

19. S. Roweis, “EM algorithms for PCA and SPCA,” Adv. Neural Inf. Process. Syst. 1500, 626 (1998).

20. D. Dussault and P. Hoess, “Noise performance comparison of ICCD with CCD and EMCCD cameras,” Proc. SPIE 5563, 195–204 (2004). [CrossRef]  

21. M. S. Robbins and B. J. Hadwen, “The noise performance of electron multiplying charge-coupled devices,” IEEE Trans. Electron Devices 50(5), 1227–1232 (2003).
