
Ray calibration and phase mapping for structured-light-field 3D reconstruction


Abstract

In previous work, we presented a structured light field (SLF) method combining light field imaging with structured illumination to perform multi-view depth measurement. However, that work achieved only depth measurement rather than full 3D reconstruction. In this paper, we propose a novel active method, involving ray calibration and phase mapping, to achieve SLF 3D reconstruction. We performed ray calibration for the first time to determine each light field ray with metric spatio-angular parameters, enabling the SLF to realize multi-view 3D reconstruction. Based on the ray parametric equation, we further derived the phase mapping in the SLF, by which spatial coordinates can be directly mapped from phase. A flexible calibration strategy was correspondingly designed to determine the mapping coefficients for each light field ray, achieving high-efficiency SLF 3D reconstruction. Experimental results demonstrate that the proposed method is suitable for high-efficiency multi-view 3D reconstruction in the SLF.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Light field imaging is an advanced technology that simultaneously records the spatio-angular information of light rays, and it has recently been developed for both scientific research and industrial application [1–7]. Traditional imaging integrates the light beam emitted from an object point and passed through the main lens onto a pixel area of the sensor, making it impossible to distinguish among the individual light rays. To distinguish the directions of light rays, one can use a camera array [8] or a single camera with optical modulation components (e.g., a microlens array [9] or an absorbing mask [10]). Light field data can then be obtained to enable post-processing imaging, for instance, digital refocusing at different depths and view shifting among different viewpoints. The light field data contain information about the three-dimensional (3D) structure of a scene, allowing us to estimate the scene depth.

Techniques commonly used for depth estimation from light field data are based on disparity and blur [11–21]. The former extracts at least two images with different viewpoints from the light field data and then calculates disparity maps by matching features among those images. The latter uses the light field data to focus at different depths to obtain focal stacks and then estimates the blurring kernel (similar to depth from defocus) or the degree of focus (similar to depth from focus). For instance, Tao et al. combined disparity and defocus cues obtained by computing the light field epipolar image along different directions to estimate the scene depth [16]. Hahne et al. traced and intersected a pair of light rays to predict the distance to a refocused object plane [20]. Chen et al. derived a geometric optical model based on on-axis point light sources to measure the distances of object planes [21]. These techniques using depth cues or ray-tracing models can be classified as passive methods, which depend on scene features such as color, texture, and illumination, achieving single-shot depth sensing without any auxiliary equipment. Unfortunately, passive methods lack robustness in complex scenes with occlusion, discontinuous depth, repetitive texture, and diverse illumination, which significantly impact the quality of depth estimation.

To address the passive method’s intrinsic problems, in our previous work [22] we presented an active method combining light field imaging with structured illumination [23, 24] to record a structured light field (SLF). In the SLF, each ray represents a specific relationship between the scene depth and the modulated phase, so that multi-view depth measurement can be achieved for high-quality 3D imaging. However, in both the passive and active methods so far, the transverse dimensions are still absent; thus, they cannot realize “real” light field 3D reconstruction.

In this paper, we propose a novel active method to achieve SLF 3D reconstruction, in which all coordinates of an object point can be reconstructed in different rays’ directions. The major contributions lie in the following aspects. First is ray calibration: we utilize an auxiliary camera to determine each light field ray with metric spatio-angular parameters, enabling the SLF to achieve precise multi-view 3D reconstruction by intersecting each recorded ray with the corresponding projected ray obtained by orthogonal fringe projection. Second is phase mapping: based on the ray parametric equation and the camera imaging model, we further prove that the spatial coordinates of object points can be directly mapped from phases in the SLF, intrinsically requiring only single-direction fringe projection. Third is coefficient calibration: we design a flexible calibration strategy to determine the phase-mapping coefficients of each ray from the calibrated spatio-angular parameters, so that high-efficiency SLF 3D reconstruction can be performed by means of the phase mapping. Finally, experimental results and analysis demonstrate that the proposed method is suitable for high-efficiency multi-view 3D reconstruction in the SLF.

2. Mathematical models

We first give a brief introduction and parameterization of the light field. The light field parameters will be determined via ray calibration in Section 3 and are employed to derive the phase-mapping relationship in Section 4. The ray intersection in Section 3 and the phase-mapping derivation in Section 4 are both related to the camera imaging process, so the camera imaging model is also introduced here in Section 2.

2.1 Light field parameterization

The term light field was first defined by Gershun in 1939 to describe the radiometric properties of light rays in space [25]. In 1991, Adelson and Bergen introduced the plenoptic function to represent spatial light rays [26]. Soon afterwards, Levoy reduced the dimensionality of the plenoptic function to the 4D light field [1], which has been widely adopted for its conciseness. The 4D light field can be parameterized through the intersection of a ray with two parallel planes.

Another convenient way to parameterize the light field is to let a ray pass through a single plane, expressing the spatial position and direction of the ray, as shown in Fig. 1. A light field coordinate system (LFCS) can be defined by selecting the plane and its normal direction as the XfYf plane and the Zf axis, respectively. The light field can then be represented as L(af,bf,θf,φf), where L denotes the radiance along a ray lf, (af,bf) denotes the coordinates of the intersection point, and θf and φf are the angles between the Zf axis and the projections of the ray onto the XfZf and YfZf planes, respectively. For simplicity, in this paper we adopt the slopes of θf and φf, i.e., we replace θf and φf with tan(θf) and tan(φf).
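To make this parameterization concrete, the following minimal sketch (in Python/NumPy; the function name and numeric values are ours and purely illustrative) evaluates the spatial point reached by a ray (af,bf,θf,φf) at a given depth along the Zf axis:

```python
import numpy as np

def point_on_ray(a_f, b_f, theta_f, phi_f, z_f):
    """Return the (X, Y, Z) coordinates where the ray crosses depth z_f.

    The ray is parameterized by its intersection (a_f, b_f) with the
    Xf-Yf plane and by the slopes theta_f = tan(angle in the Xf-Zf plane)
    and phi_f = tan(angle in the Yf-Zf plane).
    """
    x = theta_f * z_f + a_f
    y = phi_f * z_f + b_f
    return np.array([x, y, z_f])

# Example: a ray crossing the Xf-Yf plane at (1.0, -0.5) (mm) with slopes
# 0.02 and -0.01, evaluated 100 mm along the Zf axis.
print(point_on_ray(1.0, -0.5, 0.02, -0.01, 100.0))
```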

Fig. 1 Schematic diagram of 4D light field.

2.2 Camera imaging model

A camera image is formed by projecting an object point onto an image plane through a projection center, as shown in Fig. 2. The object point can be denoted as Xw=(Xw,Yw,Zw)T in a world coordinate system (WCS) and Xc=(Xc,Yc,Zc)T in a camera coordinate system (CCS), respectively. Generally speaking, lenses used for camera imaging inevitably cause lens distortion, so the complete camera imaging model involving lens distortion can be represented as [34]

\[
\begin{cases}
\mathbf{X}_c = \mathbf{R}_c \mathbf{X}_w + \mathbf{t}_c \\
\lambda_c \tilde{\mathbf{x}}_c = [\,\mathbf{I} \mid \mathbf{0}\,]\,\tilde{\mathbf{X}}_c \\
\mathbf{x}'_c = \mathbf{x}_c + \Delta(\mathbf{x}_c;\,\mathbf{k}_c) \\
\tilde{\mathbf{m}}_c = \mathbf{K}_c\,\tilde{\mathbf{x}}'_c
\end{cases}
\tag{1}
\]
where ˜ denotes a homogeneous coordinate; Rc is a rotation matrix corresponding to a rotation vector rc, and tc is a translation vector, together representing the transformation from the WCS to the CCS; λc denotes a scale factor; xc=(xc,yc)T=(Xc/Zc,Yc/Zc)T is the projected image point, whose pixel-coordinate counterpart is denoted mc in the image coordinate system; x′c is the distorted image point related to xc; Δ(xc;kc) is the corresponding distortion term; kc=(k1,k2,k3,p1,p2)T are the lens distortion parameters associated with radial and decentering distortions; and Kc is the camera parameter matrix built from the equivalent focal lengths (fu,fv)T along the two image coordinate axes, the principal point (u0,v0)T (the projected image point of the projection center), and the skew factor γ.
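As a rough illustration of Eq. (1), the sketch below projects a world point to pixel coordinates; the Rodrigues rotation and the Brown-Conrady form of the distortion term Δ are our assumptions for this example and may differ in detail from the model used in the paper:

```python
import numpy as np

def rodrigues(r):
    """Rotation matrix from a rotation vector (Rodrigues' formula)."""
    r = np.asarray(r, dtype=float)
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.eye(3)
    k = r / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def project_point(X_w, r_c, t_c, K_c, k_c):
    """Project a world point to pixel coordinates in the spirit of Eq. (1).

    X_w : (3,) world point; r_c, t_c : rotation vector and translation;
    K_c : 3x3 intrinsic matrix; k_c = (k1, k2, k3, p1, p2).
    The Brown-Conrady distortion convention used here is an assumption.
    """
    X_c = rodrigues(r_c) @ np.asarray(X_w, dtype=float) + np.asarray(t_c, dtype=float)
    x, y = X_c[0] / X_c[2], X_c[1] / X_c[2]        # ideal normalized projection
    k1, k2, k3, p1, p2 = k_c
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    m = np.asarray(K_c, dtype=float) @ np.array([x_d, y_d, 1.0])
    return m[:2] / m[2]                            # pixel coordinates (u, v)
```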

Fig. 2 Schematic diagram of camera imaging process.

3. Light field ray calibration

The SLF system consists of a projector and a light field device, e.g., a light field camera, as shown in Fig. 3. Structured light is projected onto the objects, and an SLF is recorded. In the SLF, each ray carries direction and phase information. If the projected ray incident on an object point and the corresponding recorded rays scattered from that point can be determined, the object point can be reconstructed in different rays’ directions by intersecting the projected ray with each corresponding recorded ray, achieving multi-view 3D reconstruction in the SLF.

Fig. 3 Ray intersection for multi-view 3D reconstruction in the SLF.

Since cross absolute phases provide the correspondence between the camera and the projector, the projector can be treated as a reverse camera [27]. Thus, the projector can be calibrated using the mature camera imaging model, and the projected ray can then be determined from the corresponding projector image point identified through orthogonal fringe projection. According to the mechanism of light field imaging, a spatial light ray is incident upon the light field imaging system and is integrated in a pixel unit on the detector to form an image point. The recorded ray can therefore be determined from the corresponding light field image point by calibrating the light field camera to transform pixel indices into metric dimensions. Although several light field camera models [28–30] have been presented, accurately calibrating a light field camera remains a complicated and challenging task.
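For illustration, the following sketch shows how cross absolute phases could be converted to a projector image point under the common assumption of a phase code that grows linearly across the projector; the fringe-period count and resolution defaults are placeholders, not the settings used in the paper:

```python
import numpy as np

def projector_pixel_from_phase(phi_v, phi_h, width=800, height=1280, n_periods=16):
    """Map cross absolute phases to a projector image point (u_p, v_p).

    Assumes vertical fringes encode the column and horizontal fringes the
    row, with the absolute phase growing linearly from 0 to 2*pi*n_periods
    across the full projector width/height. This linear convention and the
    default numbers are placeholders, not values taken from the paper.
    """
    u_p = phi_v * width / (2 * np.pi * n_periods)
    v_p = phi_h * height / (2 * np.pi * n_periods)
    return u_p, v_p
```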

In this paper, we propose an alternative method to achieve light field ray calibration with the aid of an auxiliary camera. A 3D measurement system can be constituted by the auxiliary camera and the projector, and fringe projection profilometry (FPP) [31–34] can then be employed to reconstruct the spatial coordinates of object points in a measuring volume, as illustrated in Fig. 4. If a target is placed in the measuring volume and the projector projects orthogonal fringe patterns onto it, the auxiliary camera and the light field camera can simultaneously capture fringe images of the target to compute cross absolute phase maps. For an object point lying on a recorded ray lf in the SLF, its homologous image points in the FPP system can be searched, and the corresponding spatial coordinates can be calculated by making use of the cross absolute phase maps. By changing the orientation of the target relative to the light field camera, additional object points on the ray lf can be obtained in the same way. As a consequence, the ray lf can be determined from these reconstructed collinear spatial points.

Fig. 4 Light field ray calibration.

The light field ray calibration can be summarized as follows.

Step 1, calibrate the FPP system in terms of the camera imaging model in Eq. (1), and set the projector coordinate system (PCS) to be the WCS.

Step 2, fix a target (e.g., a white plate) in a common field of view of the FPP system and the SLF system, and use the FPP system with orthogonal fringe projection to identify and reconstruct a spatial point (denoted as Xp) on the ray lf.

Step 3, change the orientation of the target relative to the light field camera and repeat Step 2 to obtain a series of collinear spatial points Xpi, i = 1, 2, …; use these spatial points to determine the metric spatio-angular parameters (ap,bp,θp,φp) of the ray lf, which satisfy the following parametric equation:

\[
\begin{cases}
X_p = \theta_p Z_p + a_p \\
Y_p = \varphi_p Z_p + b_p
\end{cases}
\tag{2}
\]
where the subscript p means that the ray lf is parameterized by means of the XpYp plane in the PCS.

Step 4, establish a look-up-table (LUT) for the light field ray parameters, denoting it as LUT({lf|ap,bp,θp,φp}).

Geometrically, two spatial points are enough to work out the four spatio-angular parameters. In practice, to reduce the influence of measurement error, more spatial points can be obtained to optimize the ray parameters. Therefore, the light field ray calibration can be formulated as

\[
\mathop{\arg\min}_{a_p,\,b_p,\,\theta_p,\,\varphi_p} \sum_i \left\| Z_{pi}\,(\theta_p,\varphi_p)^T + (a_p,b_p)^T - (X_{pi},Y_{pi})^T \right\|^2
\tag{3}
\]
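Because the objective of Eq. (3) separates into two linear fits (X against Z and Y against Z), it can be solved without iterative optimization. A minimal NumPy sketch (function names are ours) is:

```python
import numpy as np

def fit_ray_parameters(points):
    """Fit (a_p, b_p, theta_p, phi_p) from collinear points, as in Eq. (3).

    points : (M, 3) array of reconstructed (X, Y, Z) points on one ray.
    The objective is separable: X ~ theta_p * Z + a_p and Y ~ phi_p * Z + b_p
    are two independent linear least-squares fits.
    """
    pts = np.asarray(points, dtype=float)
    X, Y, Z = pts[:, 0], pts[:, 1], pts[:, 2]
    theta_p, a_p = np.polyfit(Z, X, 1)   # slope, intercept
    phi_p, b_p = np.polyfit(Z, Y, 1)
    return a_p, b_p, theta_p, phi_p

def fit_errors(points, a_p, b_p, theta_p, phi_p):
    """Per-point transverse residuals, e.g. for MAX/RMS statistics."""
    pts = np.asarray(points, dtype=float)
    pred = np.stack([theta_p * pts[:, 2] + a_p, phi_p * pts[:, 2] + b_p], axis=1)
    return np.linalg.norm(pred - pts[:, :2], axis=1)
```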

4. Phase mapping in the SLF

According to the measurement principle, there are two kinds of methods for scene reconstruction using an SLF. One is the phase-depth mapping of our previous work. In the SLF, a recorded ray is associated with two projected rays, reflected from the reference plane and from the object surface, respectively. The scene depth relative to the reference plane is encoded in the phase difference between these two projected rays. In other words, the phase is modulated by the scene depth during fringe projection, so the scene can be reconstructed by mapping the phase to the depth along the ray’s direction. However, phase-depth mapping in the SLF achieves only depth reconstruction, not 3D reconstruction. The other is SLF 3D reconstruction via ray intersection with the aid of orthogonal fringe projection, as described in Section 3. Nevertheless, orthogonal fringe projection greatly increases the acquisition time, which significantly reduces 3D measurement efficiency. Moreover, the phase information in orthogonal fringe projection plays only an auxiliary role; the coding mechanism between the phase and the 3D coordinates in the SLF has not yet been derived.

In the following, we derive the relationship between the phase and the 3D coordinates in the SLF. To accomplish this derivation, the ray constraint in the SLF is explored first. It should be noted that although both our previous work in [34] and the proposed method establish a phase mapping relationship for 3D reconstruction, they are based on different imaging models. In [34], the perspective projection model was adopted for conventional cameras; that model is not suitable for light field cameras. The phase mapping in the SLF can achieve multi-view 3D reconstruction from a single-view image capture, whereas that in [34] can achieve only single-view 3D reconstruction. This property endows the SLF with potential capabilities such as high-dynamic-range 3D reconstruction and real-time phase encoding and decoding.

4.1 Ray constraint

In the SLF, object points on a recorded ray lf are encoded with different phase values in the phase-coding plane of the projector. Since the projector can be treated as a reverse camera, these object points are effectively “imaged” to corresponding image points of the projector. According to the linear invariance of projective geometry [35], the recorded ray lf is related to a line lp in the phase-coding plane, as demonstrated in Fig. 5 and explained as follows.

Fig. 5 Phase encoding in the SLF.

In terms of the camera imaging model of Eq. (1), an image point xp of the projector is a projected point of an object point Xp such that

\[
\lambda_p \tilde{\mathbf{x}}_p = \mathbf{X}_p
\tag{4}
\]
Besides, the ray parametric equation of Eq. (2) can be transformed to be
\[
\lambda_f \boldsymbol{\theta}_p + \mathbf{a}_p = \mathbf{X}_p
\tag{5}
\]
by defining vectors θp=(θp,φp,1)T and ap=(ap,bp,0)T and a scale factor λf.

From Eqs. (4) and (5), an intrinsic relationship can be obtained such that (refer to Appendix)

\[
\tilde{\mathbf{x}}_p^T\,[\mathbf{a}_p]_{\times}\,\boldsymbol{\theta}_p = 0
\tag{6}
\]
where [·]× denotes the antisymmetric (cross-product) matrix of a vector. Equation (6) implies that, in the SLF, a recorded ray lf with spatio-angular parameters (ap,bp,θp,φp) defines a line lp=[ap]×θp in the phase-coding plane satisfying x̃pT lp = 0; that is, the line lp passes through the image point xp and lies in the phase-coding plane. Thus, Eq. (6) reveals the ray constraint relationship in the SLF.
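As a quick numerical check of this constraint (with illustrative ray parameters, not measured ones), the sketch below builds [ap]×, forms the line lp, and verifies that points along the ray project onto it:

```python
import numpy as np

def skew(v):
    """Antisymmetric (cross-product) matrix [v]x of a 3-vector."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

# Illustrative ray parameters (not measured values)
a_p, b_p, theta_p, phi_p = 1.0, -0.5, 0.02, -0.01
theta_vec = np.array([theta_p, phi_p, 1.0])   # (theta_p, phi_p, 1)^T
a_vec = np.array([a_p, b_p, 0.0])             # (a_p, b_p, 0)^T

# Line in the phase-coding plane defined by the recorded ray, Eq. (6)
l_p = skew(a_vec) @ theta_vec

# Every point on the ray projects (via Eqs. (4) and (5)) onto this line
for Z_p in (200.0, 350.0, 500.0):
    X_p = np.array([theta_p * Z_p + a_p, phi_p * Z_p + b_p, Z_p])
    x_tilde = X_p / Z_p                       # homogeneous image point (x_p, y_p, 1)
    print(Z_p, float(x_tilde @ l_p))          # ~0 up to rounding error
```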

4.2 Phase mapping

Next, we prove that there exists a mapping from the phase to the 3D coordinates of a measured object point in the SLF, based on the camera imaging model of Eq. (1) and the ray parametric equation of Eq. (2). This means that a recorded ray lf carrying modulated phase information ϕf can be directly mapped to the corresponding 3D spatial point Xp, intrinsically requiring only single-direction fringe projection. The derivation of the phase mapping in the SLF is as follows.

Without loss of generality, we suppose that vertical fringe patterns are projected by the SLF system. The modulated phase ϕf related to the recorded ray lf is equal to the corresponding phase code ϕp of the projector, i.e., ϕf=ϕp. As illustrated in Fig. 6, the recorded ray lf defines a corresponding line lp on the phase-coding plane according to the ray constraint in the SLF. Moreover, an object point Xp=(Xp,Yp,Zp)T on the recorded ray lf defines an image point xp=(xp,yp)T on the line lp. Similar to [34], the relationship between the modulated phase ϕf and the image point xp can be derived to be a polynomial mapping:

\[
f^{P}_{(x_p,\,y_p)} :\ \phi_f \mapsto (x_p,\,y_p)
\tag{7}
\]
where the superscript P denotes a polynomial mapping relationship.

Fig. 6 Schematic diagram of the SLF system.

Then, a ray back-projected from the image point xp intersects the recorded ray lf at the object point Xp. By combining Eqs. (4) and (5), the ray intersection can be formulated as

\[
\begin{cases}
\lambda_p \tilde{\mathbf{x}}_p = \mathbf{X}_p \\
\lambda_f \boldsymbol{\theta}_p + \mathbf{a}_p = \mathbf{X}_p
\end{cases}
\tag{8}
\]

Equation (8) provides four equations (after eliminating the scale factors) for the three unknown spatial coordinates. In fact, Eq. (6) defines a constraint on Eq. (8). Thus, Eq. (8) has three degrees of freedom, and the coordinate Zp can be worked out exactly as

\[
Z_p = \frac{a_p}{x_p - \theta_p}
\tag{9}
\]

If the SLF system is fixed, the spatio-angular parameters (ap,bp,θp,φp) of a specific light field ray lf are determined. The coordinate Zp is then a function of xp alone, and Eq. (9) can be rewritten as

\[
Z_p = \frac{1}{f^{L}_{Z}(x_p)}
\tag{10}
\]
where the superscript L denotes a linear mapping relationship.

Combining Eqs. (7) and (10), we derive the phase mapping in the SLF, i.e., \(f_{Z_p}:\phi_f \mapsto Z_p\), to be

\[
Z_p = \frac{1}{\sum_{n=0}^{N} c_n \phi_f^{\,n}}
\tag{11}
\]
where cn are the mapping coefficients and N is the polynomial order. Once the coordinate Zp has been worked out, the transverse dimensions Xp and Yp can also be computed via Eq. (2). Therefore, 3D reconstruction from a recorded ray in the SLF depends only on the modulated phase; all coordinates can be directly mapped from the phase. As a consequence, the phase mapping enables high-efficiency SLF 3D reconstruction.
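A minimal sketch of this mapping for a single calibrated ray, assuming the coefficients cn are supplied in ascending order, is:

```python
import numpy as np

def reconstruct_point(phi_f, coeffs, a_p, b_p, theta_p, phi_p):
    """Map a modulated phase to 3D coordinates along one calibrated ray.

    coeffs : mapping coefficients (c_0, ..., c_N) of Eq. (11), ascending
             order, so that 1/Z_p = sum_n c_n * phi_f**n.
    The transverse coordinates then follow from the ray equation, Eq. (2).
    """
    inv_z = np.polyval(coeffs[::-1], phi_f)   # np.polyval wants highest order first
    Z_p = 1.0 / inv_z
    X_p = theta_p * Z_p + a_p
    Y_p = phi_p * Z_p + b_p
    return np.array([X_p, Y_p, Z_p])
```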

Additionally, the polynomial order N in Eq. (11) is determined by the nonlinearity introduced by lens distortion, which makes the denominator of Eq. (11) a high-order polynomial. Nevertheless, a high-order polynomial may be susceptible to disturbance when the independent variable is large. In practical applications, the order N should therefore be chosen as a trade-off between accuracy and efficiency.

5. Coefficient calibration and 3D reconstruction in the SLF

Based on the phase mapping in the SLF, a 3D scene can be reconstructed employing single-direction fringe projection. To accomplish this, the SLF system should be calibrated first, involving light field ray calibration (Section 3) and phase-mapping coefficient calibration. Figure 7 shows the overall flow chart.

Fig. 7 The overall flow chart of system calibration and 3D reconstruction in the SLF.

5.1 Coefficient calibration

Coefficient calibration aims to determine the mapping coefficients cn of Eq. (11) for SLF 3D reconstruction via the phase mapping. Each ray lf is calibrated with its own mapping coefficients, enabling the SLF to reconstruct the 3D coordinates of object points in different rays’ directions. In this paper, we designed a flexible coefficient calibration strategy closely related to the ray calibration: after ray calibration, the projector system parameters and the light field ray parameters are available, from which sufficient spatial points with known 3D coordinates and absolute phases can be generated for coefficient fitting.

The flexible coefficient calibration is summarized as follows:

Step 1, from LUT({lf|ap,bp,θp,φp}), determine the light field ray parameters to identify a spatial ray lf in the SLF, then sample a series of spatial points Xpi, i = 1, 2, …, along this ray within the measurement volume.

Step 2, project these sampled spatial points onto the phase-coding plane of the projector to obtain the corresponding image points mpi using the calibrated system parameters Kp and kp, then calculate the absolute phases ϕpi of these image points; since the modulated phases ϕfi of the sampled points on the ray equal these absolute phases, corresponding phase-coordinate pairs (Zpi; ϕfi) can be formed.

Step 3, fit the mapping coefficients cn from these phase-coordinate pairs; in practice, Eq. (11) can be rewritten in polynomial form, i.e., \(1/Z_p=\sum_{n=0}^{N}c_n\phi_f^{\,n}\), so that a polynomial fit can be implemented without an initial value (see the sketch after these steps).

Step 4, for each ray in the SLF, repeat the above three steps to obtain the respective mapping coefficients, and finally establish a LUT, denoted LUT({lf|cn}).
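The following sketch walks through Steps 1-3 for one ray using an idealized, distortion-free projector with a linear phase code and placeholder values; the actual calibration projects the sampled points with the calibrated parameters Kp and kp instead:

```python
import numpy as np

# Illustrative ray and projector values (placeholders, not calibrated data)
a_p, b_p, theta_p, phi_p = 50.0, -20.0, 0.02, -0.01
width, n_periods, order = 800, 16, 4

# Step 1: sample 50 points along the ray over the measurement volume
Z = np.linspace(300.0, 500.0, 50)

# Step 2: project them to the phase-coding plane and convert to absolute phase.
# An ideal, distortion-free projector with placeholder intrinsics
# (f_u = 1000 px, u_0 = 400 px) and a linear phase code is assumed here.
x_p = theta_p + a_p / Z                    # normalized abscissa of points on the ray
u_p = 1000.0 * x_p + 400.0
phi = 2 * np.pi * n_periods * u_p / width

# Step 3: fit 1/Z_p as a polynomial in the phase (Eq. (11) rearranged)
coeffs = np.polyfit(phi, 1.0 / Z, order)[::-1]   # ascending order (c_0, ..., c_N)
print(coeffs)
```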

5.2 SLF 3D reconstruction

After ray calibration and coefficient calibration, each ray in the SLF has certain spatio-angular parameters and mapping coefficients. This means that 3D coordinates of object points can be directly mapped from the modulated phases in a specific ray’s direction, enabling high-efficiency multi-view 3D reconstruction in the SLF. The SLF 3D reconstruction contains the following four steps:

Step 1, project single-direction fringe patterns onto the surface of measured objects and record an SLF to demodulate a phase map ϕobj.

Step 2, for a specific ray’s direction, determine the mapping coefficients cn of relevant rays lf from LUT({lf|cn}).

Step 3, substitute the phase map and the mapping coefficients into Eq. (11) to compute the Z-coordinates.

Step 4, determine the light field ray parameters (ap,bp,θp,φp) of the relevant rays lf from LUT({lf|ap,bp,θp,φp}), and substitute these ray parameters and the Z-coordinates into Eq. (2) to compute the corresponding X- and Y-coordinates; the 3D scene is then reconstructed in this specific ray’s direction. A per-pixel sketch of Steps 2–4 is given below.
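A vectorized sketch of Steps 2-4 for one directional index; the array layouts of the two LUTs are our assumption for this example:

```python
import numpy as np

def reconstruct_view(phi_map, lut_coeffs, lut_rays):
    """Reconstruct one view (fixed directional index (u_f, v_f)) from a phase map.

    phi_map    : (H, W) unwrapped phase map for this directional index (Step 1).
    lut_coeffs : (H, W, N+1) per-ray mapping coefficients, LUT({l_f | c_n}).
    lut_rays   : (H, W, 4) per-ray parameters (a_p, b_p, theta_p, phi_p).
    Returns an (H, W, 3) point cloud in the PCS.
    """
    # Step 3: 1/Z_p = sum_n c_n * phi**n, evaluated per pixel (Eq. (11))
    N = lut_coeffs.shape[-1] - 1
    powers = phi_map[..., None] ** np.arange(N + 1)      # (H, W, N+1)
    Z = 1.0 / np.sum(lut_coeffs * powers, axis=-1)
    # Step 4: transverse coordinates from the ray equation (Eq. (2))
    a_p, b_p, theta_p, phi_p = np.moveaxis(lut_rays, -1, 0)
    X = theta_p * Z + a_p
    Y = phi_p * Z + b_p
    return np.stack([X, Y, Z], axis=-1)
```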

6. Experiments and analysis

To verify the validity of the proposed method, we set up an SLF system consisting of a light field camera (Lytro 1.0, 11 Megaray) and a DLP projector (Dell M110, 800 × 1280 pixels) for experimental demonstration and analysis. The SLF system has an adjustable working distance from 0.2 to 2 m; however, the valid measurement range is about 200 mm due to the limited depth of field of the projector. A recorded light field can be decoded into a 4D pixel index from the 2D image data captured by the Lytro camera [29], with a trade-off between the directional resolution (uf,vf) and the spatial resolution (sf,tf). In our experiments, the directional and spatial resolutions were 11 × 11 pixels and 378 × 379 pixels, respectively. To implement the light field ray calibration, an auxiliary CMOS camera (Daheng MER-130-30UM, 1024 × 1280 pixels) with a 16 mm TV lens (PANTAX) was combined with the projector to set up an FPP system. Figure 8 shows the overall system architecture for SLF system calibration and 3D reconstruction.

Fig. 8 Experimental system architecture.

6.1 System calibration

As the flow chart in Fig. 7 illustrates, the FPP system calibration should be implemented first. We used a planar target with known circle benchmarks as the standard calibration reference. The planar target was placed at 9 positions in a measurement volume of 300 mm × 300 mm × 200 mm. Since the camera imaging model in Eq. (1) used to derive the phase mapping in the SLF is a back-projection model, the FPP system was calibrated using the ray reprojection strategy [34]. Table 1 lists the intrinsic parameters Kp/c and kp/c of the projector and the camera, respectively, and the structural parameters rs and ts, representing the rigid transformation from the PCS to the CCS. During the experiments, the PCS was treated as the WCS, and the projector system parameters were used in the mapping coefficient calibration.

Table 1. FPP system parameters

Then, these calibrated FPP system parameters were used to reconstruct spatial points for the light field ray calibration. A white plane was placed at 10 positions; in each position, orthogonal fringe patterns were projected onto the white plane, and fringe analysis techniques including phase computation and unwrapping [36–38] were applied to compute the cross absolute phase maps. Through these cross absolute phase maps, 10 pairs of homologous image points in the FPP system were identified to reconstruct the corresponding spatial points on the same light field ray. Using the optimization of Eq. (3), we determined the metric spatio-angular parameters of the light field ray from these spatial points. One set of measured spatial points, related to the specific ray with light field pixel index (uf,vf,sf,tf)=(6,6,189,189), is taken as an example. The ray was optimally fitted to these spatial points, with a maximum (MAX) fitting error of 0.0626 mm and a root-mean-square (RMS) fitting error of 0.0385 mm, as illustrated in Fig. 9(a). Other rays with the same directional pixel index (uf,vf)=(6,6) are shown in Fig. 9(b); these rays are parameterized by their intersection points (ap,bp) with the XpYp plane (marked as blue points).

Fig. 9 Light field ray calibration: (a) one set of the measured spatial points along with the corresponding fitted light field ray; (b) other rays with the same directional pixel index.

Next, we used the projector system parameters and the spatio-angular parameters to implement the mapping coefficient calibration. In our experiments, 50 spatial points were sampled along each light field ray within the measurement volume. These sampled points were projected onto the image plane of the projector to obtain the corresponding absolute phases. The phase-coordinate pairs associated with the specific ray in Fig. 9(a) are plotted in Fig. 10(a); the mapping between the phase and the Z-coordinate is monotonic. We employed a 4th-order polynomial to fit this mapping, and Table 2 lists the fitted mapping coefficients. The fitted polynomial curve is also drawn in Fig. 10(a), and the corresponding fitting error curve, with a MAX value of 4.2861 × 10−7 mm and an RMS value of 1.5578 × 10−7 mm, is shown in Fig. 10(b). The fitting error is quite small, indicating that the polynomial order selected in our experiments is sufficiently accurate for field applications.

Fig. 10 Mapping coefficient calibration: (a) the sampled spatial points and the corresponding fitted polynomial curve; (b) the fitting error curve.

Table 2. Mapping coefficients

6.2 3D reconstruction

After the SLF system calibration is finished, the SLF system can perform high-quality multi-view 3D reconstruction. We reconstructed a plaster model at directional pixel index (uf,vf)=(6,6) using two methods, ray intersection and phase mapping. The respective 3D models are shown in Figs. 11(a) and 12(b), in which the upper-left grid array represents the microlens array with 11 × 11 directional pixels, and the red pixel denotes the specific directional pixel index. In addition, Fig. 11(b) shows the point cloud associated with the 3D model in Fig. 11(a). The reconstruction results of the two methods are the same. However, ray intersection requires orthogonal fringe projection, whereas the proposed phase mapping needs only single-direction fringe projection, directly mapping the phase to the 3D coordinates to achieve high-efficiency SLF 3D reconstruction. Based on the phase mapping, we also reconstructed the measured object at the other two directional pixel indices, (uf,vf)=(2,2) and (10,10), as shown in Figs. 12(a) and 12(c), respectively. The angular range among these rays’ directions is small; for instance, at the spatial position (sf,tf)=(189,189), the angles between the ray with (uf,vf)=(6,6) and the rays with (uf,vf)=(2,2) and (10,10) are 2.12 × 10−3 rad and 1.13 × 10−3 rad, respectively. One can observe that the reconstruction results in different rays’ directions show some distinctions (labeled with different color boxes in Fig. 12), particularly in the limbic areas.

Fig. 11 SLF 3D reconstruction of a plaster model using ray intersection in a specific ray’s direction: (a) 3D model; (b) point cloud.

Fig. 12 SLF 3D reconstruction of a plaster model using phase mapping in different rays’ directions.

To further illustrate the distinctions among 3D reconstructions in different rays’ directions, we tested a standard sphere. The standard sphere was reconstructed by means of the phase mapping at the three directional pixel indices, i.e., (uf,vf)=(2,2), (6,6) and (10,10), to compare their reconstruction precisions. Figure 13 shows the reconstructed 3D models of the standard sphere. These 3D models were used to perform sphere fitting. The relevant data from the sphere fitting, including the center and radius of the sphere and the MAX and RMS values of the fitting error, are listed in Table 3.
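The paper does not specify the sphere-fitting algorithm; one standard choice is an algebraic least-squares fit, sketched below, which yields the center, radius, and per-point residuals from which MAX and RMS statistics can be computed:

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit.

    Solves x^2 + y^2 + z^2 = 2*a*x + 2*b*y + 2*c*z + d, giving the center
    (a, b, c) and radius sqrt(d + a^2 + b^2 + c^2); residuals are the
    signed distances of the points from the fitted surface.
    """
    P = np.asarray(points, dtype=float)
    A = np.column_stack([2 * P, np.ones(len(P))])
    f = np.sum(P**2, axis=1)
    sol, *_ = np.linalg.lstsq(A, f, rcond=None)
    center, d = sol[:3], sol[3]
    radius = np.sqrt(d + np.dot(center, center))
    residuals = np.linalg.norm(P - center, axis=1) - radius
    return center, radius, residuals

# Usage on a reconstructed point cloud `sphere_pts` of shape (M, 3):
# center, radius, err = fit_sphere(sphere_pts)
# print(radius, np.abs(err).max(), np.sqrt((err**2).mean()))  # radius, MAX, RMS
```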

Fig. 13 SLF 3D reconstruction of a standard sphere using phase mapping in different rays’ directions.

Table 3. Data of sphere fitting and fitting error (mm)

The data in Table 3 clearly indicate that the position (the center) and the shape (the radius) of the reconstructed sphere change with the ray’s direction. In particular, the variation in the depth dimension (i.e., the Z-coordinate) caused by the ray’s direction is larger than that in the transverse dimensions (i.e., the X- and Y-coordinates). We attribute this to the depth-reconstruction uncertainty being more sensitive to the measurement angle relative to the normal direction of the object surface. Although the light field camera records rays over only a narrow range of angles due to its small numerical aperture, these rays’ directions affect the quality of 3D reconstruction to a certain extent. One can observe from these data that the reconstruction precision at the directional pixel index (uf,vf)=(6,6) is higher than at the others. Therefore, the capability of multi-view 3D reconstruction enables the SLF to optimize measurement performance. The effect of the ray’s direction on 3D reconstruction and the optimization of measurement performance in the SLF system will be explored in our future work.

6.3 Limitation of proposed method

In our experiments, because spatial multiplexing is used to encode the 4D spatio-angular information on a 2D image sensor, the light field camera suffers from an inherent trade-off between spatial and directional resolutions. As a consequence, the reconstructed 3D models have low spatial resolution, which may lead to a loss of detail. The recorded angular range is also small due to the limited numerical aperture; in microscopic imaging with a large numerical aperture, however, the proposed method could preserve more angular information for better performance, which is a possible research direction. In addition, the light field camera requires manually controlled image acquisition and time-consuming light field decoding: in our experiments, each frame of fringe projection and image capture took about 3 seconds, and light field data transmission and decoding took about 25 seconds. Although the phase mapping itself is efficient, the overall reconstruction speed is therefore quite slow. In future work, light field cameras can be designed for software-controlled automatic acquisition, and the recorded light field can be decoded with parallel acceleration, which will significantly improve the reconstruction speed.

7. Conclusion

We successfully performed the ray calibration to determine each light field ray with metric spatio-angular parameters, based on which we proposed a novel active method for SLF 3D reconstruction. In accordance with the ray parametric equation and the camera imaging model, we derived the phase mapping in the SLF such that 3D coordinates can be directly mapped from the phase with single-direction fringe projection. Correspondingly, we designed a flexible calibration strategy to determine each light field ray with specific phase-mapping coefficients. Experimental demonstration has verified that the proposed method is suitable for high-efficiency multi-view 3D reconstruction and has the potential to optimize the measurement performance in the SLF.

Appendix

The right-hand sides of Eqs. (4) and (5) are identical, so

\[
\lambda_p \tilde{\mathbf{x}}_p = \lambda_f \boldsymbol{\theta}_p + \mathbf{a}_p
\tag{12}
\]

Left-multiplying Eq. (12) by \(\tilde{\mathbf{x}}_p^T[\mathbf{a}_p]_{\times}\), its left side becomes

\[
\lambda_p \tilde{\mathbf{x}}_p^T\,[\mathbf{a}_p]_{\times}\,\tilde{\mathbf{x}}_p = 0
\tag{13}
\]
and its right side becomes
\[
\tilde{\mathbf{x}}_p^T\,[\mathbf{a}_p]_{\times}\,(\lambda_f \boldsymbol{\theta}_p + \mathbf{a}_p) = \lambda_f\,\tilde{\mathbf{x}}_p^T\,[\mathbf{a}_p]_{\times}\,\boldsymbol{\theta}_p + \tilde{\mathbf{x}}_p^T\,[\mathbf{a}_p]_{\times}\,\mathbf{a}_p = \lambda_f\,\tilde{\mathbf{x}}_p^T\,[\mathbf{a}_p]_{\times}\,\boldsymbol{\theta}_p
\tag{14}
\]
From Eqs. (12)–(14), we obtain \(\tilde{\mathbf{x}}_p^T[\mathbf{a}_p]_{\times}\boldsymbol{\theta}_p = 0\), i.e., Eq. (6).

Funding

National Key R&D Program of China (2017YFF0106401); National Natural Science Foundation of China (NSFC) (61377017); Sino-German Center for Research Promotion (SGCRP) (GZ 1391); China Postdoctoral Science Foundation (CPSF) (2017M622767); The Key Laboratory of Optoelectronic Devices and Systems of Ministry of Education and Guangdong Province (GD201608).

References and links

1. M. Levoy and P. Hanrahan, “Light field rendering,” in Proceedings of ACM SIGGRAPH (ACM, 1996), pp. 31–42.

2. M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, “Light field microscopy,” ACM Trans. Graph. 25(3), 924–934 (2006). [CrossRef]  

3. A. Orth and K. B. Crozier, “Light field moment imaging,” Opt. Lett. 38(15), 2666–2668 (2013). [CrossRef]   [PubMed]  

4. R. Prevedel, Y. G. Yoon, M. Hoffmann, N. Pak, G. Wetzstein, S. Kato, T. Schrödel, R. Raskar, M. Zimmer, E. S. Boyden, and A. Vaziri, “Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy,” Nat. Methods 11(7), 727–730 (2014). [CrossRef]   [PubMed]  

5. X. Lin, J. Wu, G. Zheng, and Q. Dai, “Camera array based light field microscopy,” Biomed. Opt. Express 6(9), 3179–3189 (2015). [CrossRef]   [PubMed]  

6. L. Tian and L. Waller, “3D intensity and phase imaging from light field measurements in an LED array microscope,” Optica 2(2), 104–111 (2015). [CrossRef]  

7. N. C. Pegard, H.-Y. Liu, N. Antipa, M. Gerlock, H. Adesnik, and L. Waller, “Compressive light-field microscopy for 3D neural activity recording,” Optica 3(5), 517–524 (2016). [CrossRef]  

8. B. Wilburn, N. Joshi, V. Vaish, E. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24(3), 765–776 (2005). [CrossRef]  

9. R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Stanford Technical Report CSTR (2005), pp. 1–11.

10. A. Veeraraghavan, R. Raskar, A. Agrawal, A. Mohan, and J. Tumblin, “Dappled photography: mask enhanced cameras for heterodyned light fields and coded aperture refocusing,” ACM Trans. Graph. 26(3), 69 (2007). [CrossRef]  

11. C. Frese and I. Gheta, “Robust depth estimation by fusion of stereo and focus series acquired with a camera array,” in Proceedings of IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (IEEE, 2006), pp. 243–248. [CrossRef]  

12. K. Atanassov, S. Goma, V. Ramachandra, and T. Georgiev, “Content-based depth estimation in focused plenoptic camera,” Proc. SPIE 7864, 78640G (2011). [CrossRef]  

13. T. E. Bishop and P. Favaro, “The light field camera: extended depth of field, aliasing, and superresolution,” IEEE Trans. Pattern Anal. Mach. Intell. 34(5), 972–986 (2012). [CrossRef]   [PubMed]  

14. S. Wanner and B. Goldluecke, “Globally consistent depth labeling of 4D light fields,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 41–48. [CrossRef]  

15. C. Kim, H. Zimmer, Y. Pritch, A. Sorkine-Hornung, and M. Gross, “Scene reconstruction from high spatio-angular resolution light fields,” ACM Trans. Graph. 32(4), 73 (2013). [CrossRef]  

16. M. W. Tao, S. Hadap, J. Malik, and R. Ramamoorthi, “Depth from combining defocus and correspondence using light-field cameras,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2013), pp. 673–680. [CrossRef]  

17. H. Lin, C. Chen, S. B. Kang, and J. Yu, “Depth recovery from light field using focal stack symmetry,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2015), pp. 3451–3459. [CrossRef]  

18. M. W. Tao, P. P. Srinivasan, S. Hadap, S. Rusinkiewicz, J. Malik, and R. Ramamoorthi, “Shape estimation from shading, defocus, and correspondence using light-field angular coherence,” IEEE Trans. Pattern Anal. Mach. Intell. 39(3), 546–560 (2017). [CrossRef]   [PubMed]  

19. Y. Zhang, H. Lv, Y. Liu, H. Wang, X. Wang, Q. Huang, X. Xiang, and Q. Dai, “Light-field depth estimation via epipolar plane image analysis and locally linear embedding,” IEEE Trans. Circ. Syst. Video Tech. 27(4), 739–747 (2017). [CrossRef]  

20. C. Hahne, A. Aggoun, V. Velisavljevic, S. Fiebig, and M. Pesch, “Refocusing distance of a standard plenoptic camera,” Opt. Express 24(19), 21521–21540 (2016). [CrossRef]   [PubMed]  

21. Y. Chen, X. Jin, and Q. Dai, “Distance measurement based on light field geometry and ray tracing,” Opt. Express 25(1), 59–76 (2017). [CrossRef]   [PubMed]  

22. Z. Cai, X. Liu, X. Peng, Y. Yin, A. Li, J. Wu, and B. Z. Gao, “Structured light field 3D imaging,” Opt. Express 24(18), 20324–20334 (2016). [CrossRef]   [PubMed]  

23. J. Salvi, S. Fernandez, T. Pribanic, and X. Llado, “A state of the art in structured light patterns for surface profilometry,” Pattern Recognit. 43(8), 2666–2680 (2010). [CrossRef]  

24. J. Geng, “Structured-light 3D surface imaging: a tutorial,” Adv. Opt. Photonics 3(2), 128–160 (2011). [CrossRef]  

25. A. Gershun, “The light field,” J. Math. Phys. 18(1-4), 51–151 (1939). [CrossRef]  

26. E. H. Adelson and J. R. Bergen, “The plenoptic function and the elements of early vision,” in Computational Models of Visual Processing (MIT, 1991), pp. 3–20.

27. S. Zhang and P. S. Huang, “Novel method for structured light system calibration,” Opt. Eng. 45(8), 083601 (2006). [CrossRef]  

28. C. Perwass and L. Wietzke, “Single lens 3D-camera with extended depth-of-field,” Proc. SPIE 8291, 829108 (2012). [CrossRef]  

29. D. G. Dansereau, O. Pizarro, and S. B. Williams, “Decoding, calibration and rectification for lenselet-based plenoptic cameras,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2013), 1027–1034. [CrossRef]  

30. C. Heinze, S. Spyropoulos, S. Hussmann, and C. Perwass, “Automated robust metric calibration of multi-focus plenoptic cameras,” in Proceedings of IEEE International Instrumentation and Measurement Technology Conference (2015), pp. 2038–2043. [CrossRef]  

31. S. S. Gorthi and P. Rastogi, “Fringe projection techniques: whither we are?” Opt. Lasers Eng. 48(2), 133–140 (2010). [CrossRef]  

32. Z. Zhang, H. Ma, S. Zhang, T. Guo, C. E. Towers, and D. P. Towers, “Simple calibration of a phase-based 3D imaging system based on uneven fringe projection,” Opt. Lett. 36(5), 627–629 (2011). [CrossRef]   [PubMed]  

33. Y. Yin, M. Wang, B. Z. Gao, X. Liu, and X. Peng, “Fringe projection 3D microscopy with the general imaging model,” Opt. Express 23(5), 6846–6857 (2015). [CrossRef]   [PubMed]  

34. Z. Cai, X. Liu, A. Li, Q. Tang, X. Peng, and B. Z. Gao, “Phase-3D mapping method developed from back-projection stereovision model for fringe projection profilometry,” Opt. Express 25(2), 1262–1277 (2017). [CrossRef]   [PubMed]  

35. R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd ed. (Cambridge, 2003).

36. X. Peng, Z. Yang, and H. Niu, “Multi-resolution reconstruction of 3-D image with modified temporal unwrapping algorithm,” Opt. Commun. 224(1-3), 35–44 (2003). [CrossRef]  

37. Z. Cai, X. Liu, H. Jiang, D. He, X. Peng, S. Huang, and Z. Zhang, “Flexible phase error compensation based on Hilbert transform in phase shifting profilometry,” Opt. Express 23(19), 25171–25181 (2015). [CrossRef]   [PubMed]  

38. C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Lasers Eng. 85, 84–103 (2016). [CrossRef]  
