Optica Publishing Group

High-resolution 3D imaging in light-field microscopy through Stokes matrices and data fusion

Open Access

Abstract

The trade-off between lateral and vertical resolution has long posed challenges to the efficient and widespread application of Fourier light-field microscopy, a highly scalable 3D imaging tool. Although existing resolution-enhancement methods can improve the measurement result to a certain extent, they come with limitations in accuracy and in the types of specimen they can handle. To address these problems, this paper proposes a resolution enhancement scheme for the Fourier light-field microscopy system based on data fusion of polarization Stokes vectors and light-field information. By introducing the surface normal vector information obtained from polarization measurements and integrating it with the light-field 3D point cloud data, the accuracy of the 3D reconstruction is greatly improved in the axial direction. Experimental results with a Fourier light-field 3D imaging microscope demonstrated a substantial enhancement of vertical resolution, with a depth resolution to depth of field ratio of 0.19%, approximately a 44-fold improvement over the theoretical ratio before data fusion, enabling the system to access more detailed information with finer measurement accuracy for the test samples. This work not only provides a feasible way to break the limitations imposed by traditional light-field microscope hardware configurations but also offers a superior 3D measurement approach in a more cost-effective and practical manner.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

As an emerging 3D imaging technology, light-field microscopy (LFM) can simultaneously capture both spatial and angular information of the incident light by placing a microlens array (MLA) in front of an image sensor, allowing computational retrieval of the full 3D volume of a specimen from a single camera image. Unlike conventional 3D imaging techniques that accumulate spatial information in a sequential or scanning fashion, the unique four-dimensional (4D) imaging ability of LFM effectively decouples volume acquisition time from the spatial scale. Its high scalability and potential for miniaturization render LFM a versatile tool in various applications [1–5]. However, the widespread adoption of LFM has been hindered by challenges such as the low resolution resulting from the limited angular and spatial information available in a single snapshot, inhomogeneous resolution in reconstructed depth images, and the lack of lateral shift invariance. These constraints lead to degraded spatial resolution, the emergence of grid-like artifacts, and increased computational complexity [6–8].

To overcome these problems, advanced optical designs and powerful algorithms have been developed [9–15] to improve LFM spatial resolution, field of view (FOV) and depth of field (DOF). However, the huge computational complexity remained fundamentally unsolved until the introduction of the Fourier light-field microscopy (FLFM) technique [16,17]. By inserting the MLA at the pupil plane, FLFM records the 4D light field in the Fourier domain (FD). This imaging scheme allocates the spatial and angular information of the incident light in a consistently aliased manner, effectively avoiding artifacts due to redundancy. By processing signals in parallel within the FD, image formation can be expressed using a unified 3D point spread function (PSF), reducing the computational cost by more than two orders of magnitude. FLFM thus provides a promising path toward improving current LFM techniques and achieving high-quality imaging and rapid light-field reconstruction. However, dividing the numerical aperture into multiple parts on a single sensor plane of limited size also introduces limitations in optimizing optical parameters, as performance metrics such as spatial resolution, depth of field, and field of view are coupled together [18–21]. To enhance the performance of FLFM, several variants have been proposed. These include advanced optical designs with two groups of MLAs and two measured PSFs to extend the DOF [22], a hybrid PSF to improve deconvolution algorithm performance [23], a continuous refocusing depth improvement algorithm via shear transformation in the light-field domain [24], a PSF optimization algorithm [25], a sparse decomposition algorithm [26], as well as confocal and scanning-based approaches [11,13] aimed at improving imaging depth and signal-to-background ratio (SBR).
Furthermore, deep learning-based microscopy image enhancement algorithms have also been introduced, significantly elevating imaging resolution and signal-to-noise ratio (SNR) [27–30] while maintaining promising computational speed due to parallel computing capabilities. However, challenges remain, including the difficulty of obtaining high-quality labeled data due to potential incompatibilities between LFM and other modalities, and the inherent trade-off between angular and spatial resolution, which remains unresolved without introducing additional information or techniques [7,17–19].

As another dimension of light-field information, polarization has shown its potential for synergizing with other 3D imaging techniques to obtain more refined 3D reconstruction results [31–37]. Unfortunately, this dimension has received limited attention and is ignored in LFM. To further improve system resolution and 3D reconstruction accuracy, this paper proposes a resolution enhancement scheme for the Fourier light-field microscopy system based on fusion of polarization and light-field data. By introducing the surface normal vector information of the measured sample obtained by polarization and combining it with the light-field 3D point cloud data, high-quality imaging and rapid light-field 3D reconstruction are enabled, and results with improved resolution and finer details can be obtained.

The rest of the paper is organized as follows. Section 2 describes the configuration of the polarization-integrated FLFM system and the fusion method. In Section 3, experiments and results with the proposed method are presented and discussed. Finally, the paper is summarized in Section 4.

2. Methods

2.1 Principle of polarized 3D reconstruction

According to Fresnel's law of reflection, the reflected light from most object surfaces consists of three components: specular polarized light, diffuse polarized light and unpolarized light. The polarization properties of the reflected light indicate the depth gradient of the object surface; this enables the computation of surface normal vectors and forms the theoretical basis for 3D reconstruction using polarization information [38]. By using a polarization camera or placing a rotating polarizer in front of a normal camera, images at different polarization angles can be obtained. The relation between the image light intensity $I({{\theta_{polar}},\varphi } )$ and the polarization angle ${\theta _{polar}}$ is given by the sinusoidal function below:

$${I({{\theta_{polar}},\varphi } )= \frac{{{I_{max}} + {I_{min}}}}{2} + \frac{{{I_{max}} - {I_{min}}}}{2}\cos ({2{\theta_{polar}} - 2\varphi } )}$$
where ${I_{max}}$ denotes the light intensity when the light vector dominates the vibration along the vertical direction; ${I_{min}}$ denotes the light intensity when the light vector vibrates perpendicular to it; and $\mathrm{\varphi }$ is the azimuth of polarization (AoP).

The light intensity and polarization status of those images can be described by the Stokes vector, which can be acquired from the light intensity at three or more polarization angles [31,39]. In this paper, the classical four polarization angles (${0^\circ }$, ${45^\circ }$, ${90^\circ }$ and ${135^\circ }$) are used to obtain the Stokes components (${S_0}$, ${S_1}$ and ${S_2}$), the degree of polarization (DoP) $\rho $ and the AoP $\mathrm{\varphi }$.

$$\begin{array}{c}{S_0} = \frac{{{I_0} + {I_{45}} + {I_{90}} + {I_{135}}}}{2}\\{S_1} = {I_0} - {I_{90}}\\{{S_2} = {I_{45}} - {I_{135}}}\end{array}$$
$${\rho = \frac{{\sqrt {{S_1}^2 + {S_2}^2} }}{{{S_0}}} = \frac{{\sqrt {{{({{I_0} - {I_{90}}} )}^2} + {{({{I_{45}} - {I_{135}}} )}^2}} }}{{\frac{1}{2}({{I_0} + {I_{45}} + {I_{90}} + {I_{135}}} )}}}$$
$${\varphi = \frac{1}{2}{{\tan }^{ - 1}}\frac{{{S_2}}}{{{S_1}}} = \frac{1}{2}{{\tan }^{ - 1}}\frac{{{I_{45}} - {I_{135}}}}{{{I_0} - {I_{90}}}}}$$

${I_0}$, ${I_{45}}$, ${I_{90}}$ and ${I_{135}}$ represent the image light intensity taken under polarization angles ${0^\circ }$, ${45^\circ }$, ${90^\circ }$ and ${135^\circ }$, respectively. We assume that the target surface is denoted by $z({x,y} )$. The object's surface shape is defined by the normal vector at each point, with the normal vector taken to have unit modulus. This normal vector, represented as $\overrightarrow {{n_p}} $, is described by the zenith angle $\theta $ and the azimuthal angle $\alpha $, where $\alpha $ ranges over $[{ - \pi ,\; \pi } ]$ and $\theta $ over $[{0,\; \frac{\pi }{2}} ]$.

$${\overrightarrow {{n_p}} = \left\{ { - \frac{{\partial z({x,y} )}}{{\partial x}},\; - \frac{{\partial z({x,y} )}}{{\partial y}},\; 1} \right\} = \{{\tan \theta \cos \alpha ,\; \tan \theta \sin \alpha ,\; 1} \}}$$
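As a minimal illustration of the Stokes-based extraction in Eqs. (2)–(4), the components, DoP and AoP can be computed per pixel from the four intensity images. This is a sketch in Python with NumPy; the array-based interface and the small division guard `eps` are our assumptions, not the paper's implementation:

```python
import numpy as np

def stokes_from_four_angles(I0, I45, I90, I135):
    """Compute Stokes components S0, S1, S2, the degree of polarization (DoP)
    and the azimuth of polarization (AoP) from intensity images captured at
    polarizer angles 0, 45, 90 and 135 degrees (Eqs. 2-4)."""
    I0, I45, I90, I135 = (np.asarray(I, dtype=float) for I in (I0, I45, I90, I135))
    S0 = 0.5 * (I0 + I45 + I90 + I135)
    S1 = I0 - I90
    S2 = I45 - I135
    eps = 1e-12                        # guard against division by zero in dark pixels
    dop = np.sqrt(S1**2 + S2**2) / (S0 + eps)
    aop = 0.5 * np.arctan2(S2, S1)     # AoP in radians; arctan2 keeps the quadrant
    return S0, S1, S2, dop, aop
```

Using `arctan2` rather than a plain `tan^{-1}` ratio avoids the undefined case $S_1 = 0$ and preserves the sign of both components.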

For samples with refractive index n, the normal vector zenith angle $\theta $ and azimuthal angle $\alpha $ are calculated differently depending on whether the surface reflection is specular or diffuse. In specular reflection polarization, the formula for calculating the normal vector azimuth $\alpha $ from the AoP $\varphi $ is given by:

$${\alpha = \varphi + \frac{\pi }{2}\;\ \textrm{or}\;\ \varphi + \frac{{3\pi }}{2}}$$

And the formula for calculating the zenith angle $\theta $ from the degree of polarization $\rho $ is as follows:

$${\rho = \frac{{2{{\sin }^2}\theta \cos \theta \sqrt {{n^2} - {{\sin }^2}\theta } }}{{{n^2} - {{\sin }^2}\theta - {n^2}{{\sin }^2}\theta + 2{{\sin }^4}\theta }}}$$

In diffuse polarization, by contrast, the parameters are calculated as follows:

$${\alpha = \varphi \;\ \textrm{or}\;\ \varphi + \pi }$$
$${\rho = \frac{{{{({n - 1/n} )}^2}{{\sin }^2}\theta }}{{2 + 2{n^2} - {{({n + 1/n} )}^2}{{\sin }^2}\theta + 4\cos \theta \sqrt {{n^2} - {{\sin }^2}\theta } }}}$$

From images taken at diverse polarization angles, the normal vectors of the object's surface are derived. By integrating the normal vectors at different positions on the surface, the relative depth information can be obtained, from which the three-dimensional surface shape is finally recovered.
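Since the diffuse-reflection relation Eq. (9) has no simple closed-form inverse, the zenith angle must be recovered from the measured DoP numerically. The sketch below assumes the diffuse DoP increases monotonically with $\theta$ on $[0, \pi/2)$ and inverts it by bisection; the function names and tolerance are illustrative:

```python
import numpy as np

def diffuse_dop(theta, n):
    """Diffuse-reflection degree of polarization as a function of the
    zenith angle theta for refractive index n (Eq. 9)."""
    s2 = np.sin(theta) ** 2
    num = (n - 1.0 / n) ** 2 * s2
    den = (2 + 2 * n**2 - (n + 1.0 / n) ** 2 * s2
           + 4 * np.cos(theta) * np.sqrt(n**2 - s2))
    return num / den

def zenith_from_dop(rho, n=1.5, tol=1e-10):
    """Invert rho(theta) on [0, pi/2) by bisection, relying on the
    monotonic increase of the diffuse DoP with zenith angle."""
    lo, hi = 0.0, np.pi / 2 - 1e-6
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if diffuse_dop(mid, n) < rho:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The specular case (Eq. 7) could be inverted the same way, except that it is not monotone over the full range and generally yields two candidate zenith angles.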

2.2 FOV matching and solution for the angle ambiguity problem

The volumetric point cloud data of the sample surface topography can be obtained by various Fourier light-field 3D reconstruction algorithms, but alignment of the FLFM multi-view stereo geometry with the polarization data, namely FOV matching, remains a challenging problem in data fusion due to the accumulation of aberration errors between different optical systems and transformation errors between different coordinate systems and images [40–42]. In this work, we propose a hardware-based approach that avoids the alignment problems caused by multiple viewpoints or distinct optical paths: the same optical path is used to simultaneously acquire both polarization and light-field image information.

As shown in Fig. 1, by inserting a polarizer in front of the FLFM lens, the system configuration enables both polarization and light-field image information to be acquired with the same camera. As displayed in Fig. 1 (left), the microlens array used in the FLFM is arranged hexagonally, with one sub-lens right at the center, allowing the light-field image to be obtained. Since the 3D point cloud distribution reconstructed with FLFM algorithms in the X-Y plane coincides with the center sub-image (No. 4) of the captured light-field image, we choose the center sub-image as the reference to retrieve the polarization information of the sample and compute its surface normal vectors. Thus, the 3D point cloud data from the FLFM reconstruction and the surface normal vectors acquired through polarization can be mapped directly to each other without extra scaling or registration. This also allows the polarization-derived vectors to be applied to non-continuous surfaces and to multiple discontinuous targets, a case that is still limited by the influence of intermittent surface points in 3D polarimetric imaging.
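With this single-optical-path scheme, extracting the central sub-aperture image from the raw light-field frame reduces to a simple crop, assuming the pixel coordinates of the central sub-lens and its pitch are known from calibration. The function name and calibration inputs below are illustrative, not from the paper:

```python
import numpy as np

def center_subimage(lf_image, center_rc, pitch_px):
    """Crop the central sub-aperture image (sub-lens No. 4 in Fig. 1) from the
    raw light-field image. `center_rc` is the (row, col) pixel position of the
    central sub-lens and `pitch_px` its pitch in pixels, both calibration values."""
    r, c = center_rc
    half = pitch_px // 2
    return lf_image[r - half:r + half, c - half:c + half]
```

Because the FLFM point cloud and the polarization parameters are both referenced to this crop, no further scaling or registration step is needed.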


Fig. 1. Fourier light-field microscope: system setup (left); schematic of sub-images captured by the camera sensor (right).


In addition to the field of view (FOV) matching issue, addressing the azimuth ambiguity and zenith angle deviation problems arising in Eq. (6) and Eq. (8) when employing polarization data for normal vector calculation is critical for fusing polarization data with other information. In this study, since the test sample is treated as a Lambertian body within FLFM 3D imaging, reflecting light diffusely, we focus specifically on the angle ambiguity challenge associated with diffuse polarimetric imaging.

In our method, the azimuth angle $\alpha $ is determined with the help of the rough 3D point cloud of the sample surface obtained with the deconvolution-based FLFM 3D reconstruction algorithm [43–45]. Retrieving the normal vector $\vec{n}$ from the 3D point cloud involves bicubic interpolation in the x, y and z directions to determine the surface normal corresponding to each pixel. To facilitate interpolation at the boundaries, quadratic extrapolation is employed to extend the data. Following the bicubic fitting, diagonal vectors are computed and then crossed to derive the normal vector at each vertex. Then, the dot product of $\vec{n}$ and the polarization-derived normal $\overrightarrow {{n_p}} $ is computed. When $\vec{n}\cdot \overrightarrow {{n_p}} > 0$, the two are consistent; otherwise $\overrightarrow {{n_p}} $ is corrected to point in the same direction as $\vec{n}$. This enables fast determination of the azimuth value and correction of the surface normal vectors, a prerequisite for accurate surface reconstruction.
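The dot-product correction described above can be sketched as follows. Since $\alpha$ and $\alpha + \pi$ in Eq. (8) correspond to negated x and y components of the normal in Eq. (5), the correction amounts to a conditional sign flip; the array layout is our assumption:

```python
import numpy as np

def disambiguate_normals(n_polar, n_coarse):
    """Resolve the azimuth pi-ambiguity (Eq. 8: alpha = phi or phi + pi).
    Adding pi to alpha negates the x and y components of the normal
    (tan(theta)cos(alpha), tan(theta)sin(alpha), 1); keep the candidate whose
    dot product with the coarse FLFM normal is non-negative.
    Both inputs are (..., 3) arrays of normals."""
    flipped = n_polar * np.array([-1.0, -1.0, 1.0])   # alpha -> alpha + pi
    keep = np.sum(n_polar * n_coarse, axis=-1, keepdims=True) >= 0
    return np.where(keep, n_polar, flipped)
```

The coarse FLFM normals only need to be correct in sign, not in magnitude, for this test to select the right branch.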

2.3 Data fusion: depth reconstruction and resolution enhancement

Based on the coarse point cloud and the corrected surface normal vectors obtained above, data fusion between the light-field point cloud and the polarization data, as well as depth reconstruction, can be achieved. In this work, the data fusion employs a scheme similar to the Southwell region fitting method, which is widely used in deflectometric measurement [40]. As depicted in Fig. 2(a), for a surface microelement $m({x,y} )$, its normal vector n can be decomposed into the depth gradients ($\frac{{\partial z}}{{\partial x}}$ and $\frac{{\partial z}}{{\partial y}}$) in the x and y directions. Here $({x,y} )$ denotes the coordinates of the microelement on the $x - y$ plane, namely the image plane. For the image pixel point $\phi ({m,n} )$ in Fig. 2(b), its local surface slope (gradient), the displacement d and the height difference of the intermediate point satisfy the relationship shown in Eq. (10):

$$\left\{ \begin{array}{c} {{S_x}({m,n + 0.5} )= \frac{{{S_x}({m,n + 1} )+ {S_x}({m,n} )}}{2} = \frac{{\phi ({m,n + 1} )- \phi ({m,n} )}}{d}}\\ {{S_y}({m + 0.5,n} )= \frac{{{S_y}({m + 1,n} )+ {S_y}({m,n} )}}{2} = \frac{{\phi ({m + 1,n} )- \phi ({m,n} )}}{d}} \end{array} \right.$$

where ${S_x}({m,n} )$ and ${S_y}({m,n} )$ are the local surface slopes corresponding to pixel point $({m,n} )$ in the vertical and horizontal directions, respectively; $\phi ({m,n} )$ represents its depth relative to the focal plane; and d is the pixel spacing, namely the pixel size. Based on this relationship, we can correct and optimize the coarse 3D point cloud using the polarization-derived normal vectors to obtain a more detailed surface topography that is closer to the true value. The whole data fusion process is illustrated in Fig. 3, which incorporates visual representations of the processing steps to aid comprehension. Through continuous frame acquisition and real-time image processing, the polarization information for the current image can be obtained by combining it with the previous three frames. Thus, only the first three frames are lost, and every subsequent frame achieves continuous polarization information acquisition and data fusion in one shot for static samples. For scenes with motion, however, this method may reduce the temporal resolution by a factor of four.
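The slope-depth relation of Eq. (10) can be used to refine the coarse depth map iteratively. The following is a minimal Jacobi-style sketch of a Southwell-type fusion with a weak anchor to the coarse FLFM depth; the paper's region-fitting solver may differ in detail, and the `anchor` weight is our assumption:

```python
import numpy as np

def southwell_fuse(phi0, Sx, Sy, d, n_iter=500, anchor=0.01):
    """Refine a coarse depth map `phi0` so that its finite differences match
    the polarization-derived slope maps `Sx`, `Sy` (Eq. 10, Southwell geometry),
    while staying weakly anchored to the coarse FLFM depth.
    `d` is the pixel pitch; `anchor` balances slope fidelity vs. the prior."""
    phi = phi0.astype(float).copy()
    for _ in range(n_iter):
        est = np.zeros_like(phi)   # sum of neighbor-implied depth values
        cnt = np.zeros_like(phi)   # number of constraints per pixel
        sx_mid = 0.5 * (Sx[:, :-1] + Sx[:, 1:])   # mid-point slopes between columns
        est[:, :-1] += phi[:, 1:] - d * sx_mid    # value implied by right neighbor
        cnt[:, :-1] += 1
        est[:, 1:] += phi[:, :-1] + d * sx_mid    # value implied by left neighbor
        cnt[:, 1:] += 1
        sy_mid = 0.5 * (Sy[:-1, :] + Sy[1:, :])   # mid-point slopes between rows
        est[:-1, :] += phi[1:, :] - d * sy_mid
        cnt[:-1, :] += 1
        est[1:, :] += phi[:-1, :] + d * sy_mid
        cnt[1:, :] += 1
        # weighted average of slope-implied depth and the coarse prior
        phi = (est + anchor * cnt * phi0) / (cnt * (1.0 + anchor))
    return phi
```

A depth map whose finite differences already agree with the slopes is a fixed point of this iteration, so the update only moves the solution where the polarization slopes and the coarse cloud disagree.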


Fig. 2. Data fusion: (a) normal vector and gradient of a surface microelement; (b) relationship between surface neighboring microelements.



Fig. 3. Data fusion process: depth reconstruction and resolution enhancement.


3. Experimental studies

In this study, a versatile polarization-integrated FLFM system was set up to experimentally validate the effectiveness of the proposed approach. The experimental configuration, illustrated in Fig. 4, comprises an FLFM system and a rotatable linear polarizer. As shown in Fig. 4, the FLFM employs a 10X objective lens (Mitutoyo Plan Apo Infinity Corrected Long WD Objective, #46-144) with a considerable 34 mm working distance. The optical relay lenses f1 and f2, forming a 1:1 magnification 4f system, are identical positive cemented achromatic doublets (#145036, Grand Unified Optics) with a 20 mm diameter and a 63 mm focal length each. Within the system, a hexagonally arranged microlens array (APH-Q-P2000-R7.8, Advanced Microoptic Systems GmbH) made of fused silica, with a 7.8 mm focal length and a 2000 $\mu $m sub-lens pitch, is placed in front of the image sensor. For image acquisition, a CMOS camera (C42, Raytrix GmbH) with a resolution of 41.3 MP and a pixel size of 1.1 $\mu $m x 1.1 $\mu $m is used, with its sensor plane placed exactly at the focal distance of the microlenses. A linear polarizer (GCL-05, Daheng Optics, demountable) mounted on a motorized rotator stage (FPSTA-8MPR16-1, Standa Ltd., Lithuania) forms the rotatable linear polarizer, which is placed in front of the objective lens to control the polarization angle. This motorized rotation stage enables continuous 360° rotation with a resolution of 0.75 arcmin and accommodates optical components with diameters up to 25.4 mm. An LED ring light source fixed around the linear polarizer provides the system illumination. The sample is placed on a specimen holder at a distance of 34 mm from the objective lens, corresponding to the working distance of the objective.
Both the camera and the motorized rotator are connected to a laptop (Y9000P, Lenovo) via USB for synchronized device control, data acquisition and image processing. By sequentially adjusting the polarization angle to 0°, 45°, 90°, and 135°, simultaneous acquisition of polarization and light-field data is achieved through the same optical path. Subsequently, employing the data fusion and computation process outlined in Fig. 3, results with improved axial resolution and enhanced 3D reconstruction accuracy are obtained.


Fig. 4. Polarization-integrated FLFM experimental system setup.


To demonstrate the proposed technique quantitatively, we performed an initial test on a quartz capillary tube with a polyamide coating layer on its surface (WH-MXG-150280), with an outer diameter of 0.280 ± 0.02 $\textrm{mm}$; its actual value measured under a commercial microscope is 283 $\mathrm{\mu}\textrm{m}$, as shown in Fig. 5(a). The refractive index of the polyamide coating layer is 1.5869 according to the product specifications. This type of sample is often used in fields such as micro- and nano-sensing, microfluidics, and gas-liquid chromatography due to its special physical structure and good optical properties, and it is a common target for optical inspection. The polyamide coating layer scatters and reflects most of the light, providing a better light-field image.


Fig. 5. Images of quartz capillary tube: (a) microscopic measurement of the quartz capillary; (b) light-field image of the quartz capillary tube under polarization angle 0°.


By controlling the motorized rotator, the polarization angle is sequentially adjusted to 0°, 45°, 90° and 135°, and images of the quartz capillary tube are captured at each angle. In our approach, the center sub-lens image, marked by the red rectangle in Fig. 5(b), is selected as the reference for polarization parameter extraction, and the light-field image (LFI) at polarization angle 0° is used for coarse point cloud computation with the 3D reconstruction algorithm.

Using the captured polarization images in Fig. 6(a), the polarization parameters of the quartz capillary tube, including the degree of polarization $\rho $ (Fig. 6(b)) and the azimuth of polarization $\varphi $ (Fig. 6(c)), are first computed in radians. Subsequently, the azimuthal angle $\alpha $ shown in Fig. 6(e) and the zenith angle $\theta $ displayed in Fig. 6(f) are generated. The normal vectors of the sample surface, illustrated in Fig. 6(d), undergo refinement using the coarse point cloud derived from the 0° polarization light-field image. The resultant normal vector modulus after optimization is depicted in Fig. 6(g). The 3D reconstruction result prior to data fusion is depicted in Fig. 7(b). The point cloud data is clustered and filtered with Open3D to ensure that only one value is kept when fusing with the polarization data. Finally, fusion of the optimized normal vectors and the FLFM 3D data is performed with the region fitting method described in Fig. 3; the 3D reconstruction result after data fusion is displayed in Fig. 7(c).


Fig. 6. Polarization parameters of the quartz capillary tube: (a) polarization images; (b) degree of polarization (DoP) $\rho $; (c) azimuth of polarization (AoP) $\varphi $; (e) azimuthal angle $\alpha $; (f) zenith angle $\theta $; (d) and (g) normal vectors and its modulus.



Fig. 7. Measured results of the quartz capillary tube: (a) 3D result measured with a ZYGO white light interferometer; (b) 3D result reconstructed prior to data fusion; (c) 3D result reconstructed post data fusion; (d) 3D result reconstructed with shearlet-based light-field 3D reconstruction algorithm; (e) depth residual of (a) & (b); (f) depth residual of (a) & (c); (g) depth residual of (a) & (d).


To highlight the differences between the outcomes obtained through the proposed method and alternative approaches, we compared the result before data fusion (Fig. 7(b)), the result after data fusion (Fig. 7(c)), and that obtained using the shearlet-based Fourier light-field 3D reconstruction algorithm in Sergio's digital refocusing method [24] (Fig. 7(d)). As can be seen from the 3D results, the sample surface shape was successfully reconstructed in all three point cloud maps, with depth values varying from 0 to 280 $\mathrm{\mu}\textrm{m}$. However, the discrepancies between them are not obvious and are difficult to distinguish from the point clouds alone.

To further assess the disparities between the three results, we calculated the residuals with Open3D [46] of each result with respect to the reference obtained through commercial ZYGO white light interferometry (Fig. 7(a)), and quantitatively compared their standard deviation error (STDE) and root-mean-square error (RMSE) values. From the depth residual results in Fig. 7(e), (f) and (g), it is clear that the result before data fusion has the maximum depth deviation from the reference, followed by the result of the shear algorithm, while the result after data fusion has the smallest errors. Further quantitative analysis of the STDE and RMSE values based on the residual data is presented in Table 1. The deviation of the result prior to data fusion from the reference is indicated by an STDE of 20.3 $\mathrm{\mu}\textrm{m}$ and an RMSE of 13.7 $\mathrm{\mu}\textrm{m}$. The shearlet-based algorithm yields an STDE of 13.7 $\mathrm{\mu}\textrm{m}$ and an RMSE of 4.7 $\mathrm{\mu}\textrm{m}$, while our algorithm shows the lowest STDE of 1.2 $\mathrm{\mu}\textrm{m}$ and an RMSE of 1.3 $\mathrm{\mu}\textrm{m}$. This indicates a longitudinal deviation from the true value of only 1.3 $\mathrm{\mu}\textrm{m}$ after data fusion, whereas the theoretical DOF of the system calculated according to the design guidelines [20,21] is 465.8 $\mathrm{\mu}\textrm{m}$ with a longitudinal resolution of only 38.9 $\mathrm{\mu}\textrm{m}$, giving a depth resolution to DOF ratio of 8.4%, which indicates low axial resolving capability. These results demonstrate that the proposed polarization data fusion method effectively enhances the longitudinal resolution and greatly improves the 3D reconstruction results and measurement accuracy of FLFM systems.
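The STDE and RMSE figures quoted here can be computed from aligned depth maps as follows. This is a minimal sketch; the paper's Open3D-based point cloud residual pipeline, which handles clustering and nearest-neighbor matching, is more involved:

```python
import numpy as np

def depth_residual_metrics(depth_ref, depth_test):
    """Depth residual statistics of a test reconstruction against a reference
    depth map: standard deviation error (STDE) and root-mean-square error (RMSE).
    NaNs mark points missing in either reconstruction and are excluded."""
    res = np.asarray(depth_test, dtype=float) - np.asarray(depth_ref, dtype=float)
    res = res[np.isfinite(res)]        # drop missing/unmatched points
    stde = np.std(res)
    rmse = np.sqrt(np.mean(res ** 2))
    return stde, rmse
```

Note that STDE measures the spread of the residuals about their mean, while RMSE also penalizes any constant depth offset, which is why the two can rank methods differently.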


Table 1. Comparison of measurement errors for a quartz capillary tube.

To further confirm the enhanced resolving power and the capability to access detailed information in rough and discontinuous scenes, we conducted a test using a photosensitive 3D-printed pattern sample shown in Fig. 8(a). Notably, the sample exhibits multiple rough and discontinuous regions along its inner pattern edge, marked in red. Following a procedure similar to that described earlier, a series of polarization images were captured and the normal vectors of the sample's surface, shown in Fig. 8(b), were derived. Subsequently, 3D point cloud results prior to data fusion (Fig. 9(b)), post data fusion (Fig. 9(c)), and using the reconstruction algorithm derived from Sergio's method (Fig. 9(d)) were obtained.


Fig. 8. Images of a 3D printed pattern: (a) microscopic observation; (b) retrieved normal vectors by the proposed system.



Fig. 9. Measured results of the 3D printed pattern: (a) 3D result measured with a ZYGO white light interferometer; (b) 3D result reconstructed prior to data fusion; (c) 3D result reconstructed post data fusion; (d) 3D result reconstructed with shearlet-based light-field 3D reconstruction algorithm; (e) depth residual of (a) and (b); (f) depth residual of (a) and (c); (g) depth residual of (a) and (d).


Analyzing the reconstructed 3D point cloud results, it can be seen that the result acquired prior to data fusion in Fig. 9(b) lacks some edge detail, with a depth fluctuation range exceeding 120 $\mathrm{\mu}\textrm{m}$, whereas the other two results remain below 120 $\mathrm{\mu}\textrm{m}$. The reference result obtained with the commercial ZYGO white light interferometer is depicted in Fig. 9(a), and depth residuals relative to it are computed separately. As shown in the residual maps in Fig. 9(e), (f) and (g), the outcome before data fusion exhibits several instances of ineffective reconstruction in the fringe region. Sergio's method suffers less severe shortcomings (Fig. 9(g)), but still produces a large number of missing-data and error regions. Conversely, the post data fusion result (Fig. 9(f)) has the smallest errors and is the closest match to the reference. This indicates that the proposed data fusion method has much better resolving power and is able to acquire more detailed information in rough and discontinuous scenes.

For a more quantitative evaluation, a comparison of these three results using STDE and RMSE values is summarized in Table 2. Compared with the result prior to data fusion, the STDE of the optimized depth residual map was reduced from 51.9 $\mathrm{\mu}\textrm{m}$ to 2.8 $\mathrm{\mu}\textrm{m}$ and the RMSE from 11.8 $\mathrm{\mu}\textrm{m}$ to 0.9 $\mathrm{\mu}\textrm{m}$, indicating a significant enhancement in depth resolution and improved reconstruction fineness. In contrast, the residual depth map of Sergio's method shows an STDE of 17.3 $\mathrm{\mu}\textrm{m}$ and an RMSE of 5.1 $\mathrm{\mu}\textrm{m}$, much higher than those obtained by the present method, demonstrating its superior 3D reconstruction capability. In addition, the ratio of depth resolution to DOF has improved to 0.19%, approximately a 44-fold improvement over the theoretical ratio of 8.4% for the FLFM system, demonstrating the effectiveness of our polarization data fusion method in achieving higher longitudinal resolution and more detailed reconstruction results.


Table 2. Comparison of measurement errors for a 3D printed pattern.

The experimental results for the above two samples show that the proposed polarization data fusion method can be applied to increase resolving power and acquire more detailed information in rough and discontinuous scenes in FLFM measurements, with higher 3D reconstruction accuracy and detail extraction capability than Sergio's method. Thus, by introducing polarization into the FLFM system and applying the proposed data fusion scheme, we obtain more detailed information on the sample surface, resulting in enhanced resolution and higher-quality 3D reconstruction results.

4. Conclusion

In summary, we have developed a versatile polarization-integrated FLFM system configuration and a data fusion method that enable simultaneous acquisition of both polarization and light-field image information through the same optical path, improving both resolution and reconstruction accuracy. By additionally introducing and fusing polarization information from the sample surface into the 3D volume reconstruction, the proposed system achieves much better resolving power and improved access to detailed information in rough and discontinuous scenes. Experiments on a Fourier light-field 3D imaging microscope with a quartz capillary tube and a 3D-printed pattern showed a significant improvement in vertical resolution, with a depth resolution to DOF ratio of 0.19%, about a 44-fold improvement over the theoretical ratio before data fusion, giving access to more detailed information of the test samples with enhanced measurement accuracy. Although the current method applies only to polarization-sensitive samples, by combining this versatile polarization-integrated FLFM configuration and data fusion strategy with design parameters tailored to optical imaging and inspection scenarios in biology, materials, and industrial components, the system is expected to overcome the limitations imposed by traditional light-field microscope hardware configurations, enhance both horizontal and vertical resolution, and facilitate superior 3D measurement in a more economical and practical manner.

Funding

National Natural Science Foundation of China (52075100).

Acknowledgments

The authors would like to thank Cong Xiong for providing the test sample, and Zhefeng Wang and Yidan Li for helping revise and polish the manuscript.

Disclosures

The authors declare no conflicts of interest.

Data availability

The data that support the findings of this study are available upon reasonable request.

References

1. M. Levoy, R. Ng, A. Adams, et al., “Light field microscopy,” ACM Trans. Graph. 25(3), 924–934 (2006).

2. M. Levoy, Z. Zhang, and I. McDowall, “Recording and controlling the 4d light field in a microscope using microlens arrays,” J. Microsc. 235(2), 144–162 (2009).

3. M. Broxton, L. Grosenick, S. Yang, et al., “Wave optics theory and 3-d deconvolution for the light field microscope,” Opt. Express 21(21), 25418–25439 (2013).

4. B. Javidi, A. Carnicer, J. Arai, et al., “Roadmap on 3d integral imaging: sensing, processing, and display,” Opt. Express 28(22), 32266–32293 (2020).

5. K. Kim, “Single-shot light-field microscopy: an emerging tool for 3d biomedical imaging,” BioChip J. 16(4), 397–408 (2022).

6. H. Li, C. Guo, D. Kim-Holzapfel, et al., “Fast, volumetric live-cell imaging using high-resolution light-field microscopy,” Biomed. Opt. Express 10(1), 29–49 (2019).

7. D. Wang, Z. Zhu, Z. Xu, et al., “Neuroimaging with light field microscopy: a mini review of imaging systems,” Eur. Phys. J. Spec. Top. 231(4), 749–761 (2022).

8. M. Martínez-Corral and B. Javidi, “Fundamentals of 3d imaging and displays: a tutorial on integral imaging, light-field, and plenoptic systems,” Adv. Opt. Photonics 10(3), 512–566 (2018).

9. Z. Fu, Q. Geng, J. Chen, et al., “Light field microscopy based on structured light illumination,” Opt. Lett. 46(14), 3424–3427 (2021).

10. B. Xiong, T. Zhu, Y. Xiang, et al., “Mirror-enhanced scanning light-field microscopy for long-term high-speed 3d imaging with isotropic resolution,” Light: Sci. Appl. 10(1), 227 (2021).

11. J. Wu, Z. Lu, D. Jiang, et al., “Iterative tomography with digital adaptive optics permits hour-long intravital observation of 3d subcellular dynamics at millisecond scale,” Cell 184(12), 3318–3332.e17 (2021).

12. N. Wagner, N. Norlin, J. Gierten, et al., “Instantaneous isotropic volumetric imaging of fast biological processes,” Nat. Methods 16(6), 497–500 (2019).

13. Z. Zhang, L. Bai, L. Cong, et al., “Imaging volumetric dynamics at high speed in mouse and zebrafish brain with confocal light field microscopy,” Nat. Biotechnol. 39(1), 74–83 (2021).

14. X. Hua, W. Liu, and S. Jia, “High-resolution fourier light-field microscopy for volumetric multi-color live-cell imaging,” Optica 8(5), 614–620 (2021).

15. L. Zhu, C. Yi, G. Li, et al., “Deep-learning based dual-view light-field microscopy enabling high-resolution 3d imaging of dense signals,” in Optics InfoBase Conference Papers (2021).

16. A. Llavador, J. Sola-Pikabea, G. Saavedra, et al., “Resolution improvements in integral microscopy with fourier plane recording,” Opt. Express 24(18), 20792–20798 (2016).

17. Z. Zhang, L. Cong, L. Bai, et al., “Light-field microscopy for fast volumetric brain imaging,” J. Neurosci. Methods 352, 109083 (2021).

18. C. Yi, L. Zhu, D. Li, et al., “Light field microscopy in biological imaging,” J. Innov. Opt. Health Sci. 16(01), 2230017 (2023).

19. O. Bimber and D. Schedl, “Light-field microscopy: a review,” J. Neurol. Neuromedicine 4(1), 1–6 (2019).

20. G. Scrofani, J. Sola-Pikabea, A. Llavador, et al., “Fimic: design for ultimate 3d-integral microscopy of in-vivo biological samples,” Biomed. Opt. Express 9(1), 335–346 (2018).

21. L. Galdón, G. Saavedra, J. Garcia-Sucerquia, et al., “Fourier lightfield microscopy: a practical design guide,” Appl. Opt. 61(10), 2558–2564 (2022).

22. L. Cong, Z. Wang, Y. Chai, et al., “Rapid whole brain imaging of neural activity in freely behaving larval zebrafish (danio rerio),” eLife 6, e28158 (2017).

23. Y. Zhang, Y. Wang, M. Wang, et al., “Multi-focus light-field microscopy for high-speed large-volume imaging,” PhotoniX 3(1), 30 (2022).

24. S. Moreschini, G. Scrofani, R. Bregovic, et al., “Continuous refocusing for integral microscopy with fourier plane recording,” in 2018 26th European Signal Processing Conference (EUSIPCO) (2018), pp. 216–220.

25. Z. Lu, J. Wu, H. Qiao, et al., “Phase-space deconvolution for light field microscopy,” Opt. Express 27(13), 18131–18145 (2019).

26. F. L. Liu, G. Kuo, N. Antipa, et al., “Fourier diffuserscope: single-shot 3d fourier light field microscopy with a diffuser,” Opt. Express 28(20), 28969–28986 (2020).

27. Y. Zhang, B. Xiong, Y. Zhang, et al., “Dilfm: an artifact-suppressed and noise-robust light-field microscopy through dictionary learning,” Light: Sci. Appl. 10(1), 152 (2021).

28. J. Rostan, N. Incardona, E. Sanchez-Ortiga, et al., “Machine learning-based view synthesis in fourier lightfield microscopy,” Sensors 22(9), 3487 (2022).

29. Y. Quéau, J.-D. Durou, and J.-F. Aujol, “Normal integration: a survey,” arXiv:1709.05940 (2017).

30. Y. Feng, B. Xue, M. Liu, et al., “D2nt: a high-performing depth-to-normal translator,” arXiv:2304.12031 (2023).

31. A. Kadambi, V. Taamazyan, B. Shi, et al., “Polarized 3d: synthesis of polarization and depth cues for enhanced 3d sensing,” in SIGGRAPH 2015: Studio (Association for Computing Machinery, New York, NY, USA, 2015), SIGGRAPH '15.

32. Z. Wang, L. Zhu, H. Zhang, et al., “Real-time volumetric reconstruction of biological dynamics with light-field microscopy and deep learning,” Nat. Methods 18(5), 551 (2021).

33. L. Zhu, C. Yi, and P. Fei, “A practical guide to deep-learning light-field microscopy for 3d imaging of biological dynamics,” STAR Protocols 4(1), 102078 (2023).

34. G. A. Atkinson and E. R. Hancock, “Surface reconstruction using polarization and photometric stereo,” Computer Analysis of Images and Patterns 4673, 466–473 (2007).

35. A. H. Mahmoud, M. T. El-Melegy, and A. A. Farag, “Direct method for shape recovery from polarization and shading,” in 2012 19th IEEE International Conference on Image Processing (IEEE, 2012), pp. 1769–1772.

36. A. Kadambi, V. Taamazyan, B. Shi, et al., “Depth sensing using geometrically constrained polarization normals,” Int. J. Comput. Vis. 125(1-3), 34–51 (2017).

37. Y. Jin-Fa, Y. Lei, Z. Hong-Ying, et al., “Shape from polarization of low-texture objects with rough depth information,” J. Infrared Millim. Waves 38, 819–827 (2019).

38. X. Tian, R. Liu, Z. Wang, et al., “High quality 3d reconstruction based on fusion of polarization imaging and binocular stereo vision,” Inf. Fusion 77, 19–28 (2022).

39. R. Liu, H. Liang, Z. Wang, et al., “Fusion-based high-quality polarization 3d reconstruction,” Opt. Lasers Eng. 162, 107397 (2023).

40. S. Nayar, K. Ikeuchi, and T. Kanade, “Surface reflection: physical and geometrical perspectives,” IEEE Trans. Pattern Anal. Machine Intell. 13(7), 611–634 (1991).

41. L. B. Wolff, “Polarization vision: a new sensory approach to image understanding,” Image Vis. Comput. 15(2), 81–93 (1997).

42. G. Li, Y. Li, K. Liu, et al., “Improving wavefront reconstruction accuracy by using integration equations with higher-order truncation errors in the southwell geometry,” J. Opt. Soc. Am. A 30(7), 1448–1459 (2013).

43. A. Stefanoiu, J. Page, P. Symvoulidis, et al., “Artifact-free deconvolution in light field microscopy,” Opt. Express 27(22), 31644–31666 (2019).

44. A. Stefanoiu, G. Scrofani, G. Saavedra, et al., “What about computational super-resolution in fluorescence fourier light field microscopy?” Opt. Express 28(11), 16554–16568 (2020).

45. A. Stefanoiu, G. Scrofani, G. Saavedra, et al., “3d deconvolution in fourier integral microscopy,” Proc. SPIE 11396, 1139601 (2020).

46. Intelligent Systems Lab Org., “Open3D: a modern library for 3D data processing,” v0.18, GitHub (2020), https://github.com/isl-org/Open3D.



Figures (9)

Fig. 1. Fourier light-field microscope: system setup (left); schematic of sub-images captured by the camera sensor (right).
Fig. 2. Data fusion: (a) normal vector and gradient of a surface microelement; (b) relationship between neighboring surface microelements.
Fig. 3. Data fusion process: depth reconstruction and resolution enhancement.
Fig. 4. Polarization-integrated FLFM experimental system setup.
Fig. 5. Images of the quartz capillary tube: (a) microscopic measurement; (b) light-field image under polarization angle 0°.
Fig. 6. Polarization parameters of the quartz capillary tube: (a) polarization images; (b) degree of polarization (DoP) $\rho$; (c) azimuth of polarization (AoP) $\varphi$; (e) azimuthal angle $\alpha$; (f) zenith angle $\theta$; (d) and (g) normal vectors and their modulus.
Fig. 7. Measured results of the quartz capillary tube: (a) 3D result measured with a ZYGO white light interferometer; (b) 3D result reconstructed prior to data fusion; (c) 3D result reconstructed post data fusion; (d) 3D result reconstructed with the shearlet-based light-field 3D reconstruction algorithm; (e) depth residual of (a) and (b); (f) depth residual of (a) and (c); (g) depth residual of (a) and (d).
Fig. 8. Images of a 3D printed pattern: (a) microscopic observation; (b) normal vectors retrieved by the proposed system.
Fig. 9. Measured results of the 3D printed pattern: (a) 3D result measured with a ZYGO white light interferometer; (b) 3D result reconstructed prior to data fusion; (c) 3D result reconstructed post data fusion; (d) 3D result reconstructed with the shearlet-based light-field 3D reconstruction algorithm; (e) depth residual of (a) and (b); (f) depth residual of (a) and (c); (g) depth residual of (a) and (d).

Tables (2)

Table 1. Comparison of measurement errors for a quartz capillary tube.

Table 2. Comparison of measurement errors for a 3D printed pattern.

Equations (10)

Equations on this page are rendered with MathJax.

$$I(\theta_{polar}, \varphi) = \frac{I_{\max} + I_{\min}}{2} + \frac{I_{\max} - I_{\min}}{2}\cos(2\theta_{polar} - 2\varphi)$$
$$S_0 = \frac{I_0 + I_{45} + I_{90} + I_{135}}{2}, \quad S_1 = I_0 - I_{90}, \quad S_2 = I_{45} - I_{135}$$
$$\rho = \frac{\sqrt{S_1^2 + S_2^2}}{S_0} = \frac{\sqrt{(I_0 - I_{90})^2 + (I_{45} - I_{135})^2}}{\tfrac{1}{2}(I_0 + I_{45} + I_{90} + I_{135})}$$
$$\varphi = \frac{1}{2}\tan^{-1}\frac{S_2}{S_1} = \frac{1}{2}\tan^{-1}\frac{I_{45} - I_{135}}{I_0 - I_{90}}$$
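As an illustration, the Stokes, DoP, and AoP relations above can be sketched in a few lines of NumPy. This is a minimal sketch under stated assumptions, not the authors' implementation: the four inputs are intensity images at polarizer angles 0°, 45°, 90°, and 135°, `eps` is a small guard against division by zero, and `np.arctan2` is used in place of the plain arctangent so the quadrant of the azimuth is resolved.

```python
import numpy as np

def stokes_dop_aop(i0, i45, i90, i135, eps=1e-12):
    """Compute Stokes parameters S0, S1, S2, the degree of linear
    polarization rho, and the azimuth of polarization phi from four
    intensity images captured at polarizer angles 0/45/90/135 degrees."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)
    s1 = i0 - i90
    s2 = i45 - i135
    rho = np.sqrt(s1**2 + s2**2) / (s0 + eps)  # degree of polarization
    phi = 0.5 * np.arctan2(s2, s1)             # azimuth of polarization
    return s0, s1, s2, rho, phi
```

For example, a fully polarized beam aligned with the 0° axis (I0 = 2, I45 = I135 = 1, I90 = 0) yields rho ≈ 1 and phi = 0.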
$$\mathbf{n}_p = \left\{ \frac{\partial z(x,y)}{\partial x}, \frac{\partial z(x,y)}{\partial y}, 1 \right\} = \{\tan\theta\cos\alpha, \tan\theta\sin\alpha, 1\}$$
$$\alpha = \varphi + \frac{\pi}{2} \quad \text{or} \quad \varphi + \frac{3\pi}{2}$$
$$\rho = \frac{2\sin^2\theta\cos\theta\sqrt{n^2 - \sin^2\theta}}{n^2 - \sin^2\theta - n^2\sin^2\theta + 2\sin^4\theta}$$
$$\alpha = \varphi \quad \text{or} \quad \varphi + \pi$$
$$\rho = \frac{(n - 1/n)^2\sin^2\theta}{2 + 2n^2 - (n + 1/n)^2\sin^2\theta + 4\cos\theta\sqrt{n^2 - \sin^2\theta}}$$
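The zenith angle $\theta$ enters the diffuse DoP expression above without a closed-form inverse, so one common choice, assumed here for illustration rather than taken from the paper, is to tabulate $\rho(\theta)$ and invert by interpolation; the refractive index value n = 1.5 is likewise an assumption. The normal then follows from the zenith and azimuthal angles as in the gradient equation above.

```python
import numpy as np

def diffuse_dop(theta, n=1.5):
    """Degree of polarization of diffuse reflection at zenith angle theta."""
    s2 = np.sin(theta) ** 2
    return ((n - 1/n)**2 * s2) / (
        2 + 2*n**2 - (n + 1/n)**2 * s2
        + 4*np.cos(theta)*np.sqrt(n**2 - s2))

def zenith_from_dop(rho, n=1.5, samples=2048):
    """Invert the monotonic rho(theta) curve by table lookup."""
    theta_tab = np.linspace(0.0, np.pi/2 - 1e-3, samples)
    rho_tab = diffuse_dop(theta_tab, n)
    return np.interp(rho, rho_tab, theta_tab)

def normal_from_angles(theta, alpha):
    """Surface normal {tan(theta)cos(alpha), tan(theta)sin(alpha), 1}."""
    t = np.tan(theta)
    return np.stack([t*np.cos(alpha), t*np.sin(alpha),
                     np.ones_like(theta)], axis=-1)
```

The lookup works because the diffuse DoP increases monotonically with zenith angle on [0, π/2), so `np.interp` sees a sorted abscissa.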
$$\begin{cases} S_x(m, n+0.5) = \dfrac{S_x(m, n+1) + S_x(m, n)}{2} = \dfrac{\phi(m, n+1) - \phi(m, n)}{d} \\[1.5ex] S_y(m+0.5, n) = \dfrac{S_y(m+1, n) + S_y(m, n)}{2} = \dfrac{\phi(m+1, n) - \phi(m, n)}{d} \end{cases}$$
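The Southwell relations above tie midpoint slopes (averages of neighboring node slopes) to depth differences over the grid pitch d. A minimal least-squares integrator in this geometry is sketched below; it is a first-order illustration with the free offset fixed by pinning phi(0,0) = 0, not the higher-order scheme of Ref. [42], and the dense solver is only practical for small grids.

```python
import numpy as np

def southwell_integrate(sx, sy, d=1.0):
    """Least-squares zonal (Southwell) integration of slope maps sx, sy
    (each M x N, slopes at grid nodes) into a depth map phi, pitch d."""
    M, N = sx.shape
    eqs_x = M * (N - 1)
    eqs_y = (M - 1) * N
    A = np.zeros((eqs_x + eqs_y + 1, M * N))
    b = np.zeros(eqs_x + eqs_y + 1)
    k = 0
    # phi(m, n+1) - phi(m, n) = d * (sx(m, n+1) + sx(m, n)) / 2
    for m in range(M):
        for n in range(N - 1):
            A[k, m * N + n + 1] = 1.0
            A[k, m * N + n] = -1.0
            b[k] = d * (sx[m, n + 1] + sx[m, n]) / 2.0
            k += 1
    # phi(m+1, n) - phi(m, n) = d * (sy(m+1, n) + sy(m, n)) / 2
    for m in range(M - 1):
        for n in range(N):
            A[k, (m + 1) * N + n] = 1.0
            A[k, m * N + n] = -1.0
            b[k] = d * (sy[m + 1, n] + sy[m, n]) / 2.0
            k += 1
    A[k, 0] = 1.0  # pin phi(0,0) = 0 to remove the constant offset
    phi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return phi.reshape(M, N)
```

As a sanity check, a uniform slope field sx = 1, sy = 0 with d = 1 integrates to the tilted plane phi(m, n) = n.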