
General phase-difference imaging of incoherent digital holography

Open Access

Abstract

The hologram formed by incoherent holography based on self-interference should preserve the phase-difference information of the object, such as the phase difference between the mutually orthogonal polarization components of an anisotropic object. Decoding this phase difference from the incoherent hologram, i.e., phase-difference imaging, is of great significance for studying the properties of the measured object. However, there is no general phase-difference imaging theory, owing both to the diversity of incoherent holography systems and to the complicated diffraction-based reconstruction process from holograms. To realize phase-difference imaging in incoherent holography, the relationship between the phase difference of the object and the image reconstructed from the hologram is derived using a general physical model of incoherent holographic systems, and the additional phase that distorts this relationship in actual holographic systems is then analyzed and eliminated. Finally, phase-difference imaging suitable for most incoherent holographic systems is realized, and the general theory is experimentally verified. This technology can be applied to phase-difference imaging of anisotropic objects and has potential applications in materials science, biomedicine, polarized optics, and other fields.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Holography is well known for its three-dimensional imaging and phase imaging [1–4], and plays an important role in applications such as industrial inspection, materials science, and biomedicine [5–8]. As a branch of holography, incoherent holography generates holograms based on the self-interference of incoherent light [9–12], which solves the inherent reliance of holography on coherent light. Compared with holography that relies on coherent light (coherent holography for short), incoherent holography is not only suitable for self-luminous objects or objects illuminated by natural light [13–16], but also offers high-resolution, high-quality imaging [17–20], and is a promising three-dimensional imaging technology [21–26].

In incoherent holographic systems, the light emitted from each object point is divided into two parts that pass through two optical systems with equal optical paths but different light modulations, and finally interfere to form a hologram [27–30]. The most concise and classic incoherent holographic system is the FINCH (Fresnel Incoherent Correlation Holography) system, shown in Fig. 1. The light emitted from the original object point $u_0(\vec{r}_0)$ and split by the spatial light modulator (SLM) can be regarded as two sub-light fields emitted from two sub-object points $u_{01}(\vec{r}_0)$ and $u_{02}(\vec{r}_0)$, formed by the polarization decomposition of $u_0(\vec{r}_0)$. These two sub-light fields pass through the horizontal-polarization subsystem (marked in blue) and the vertical-polarization subsystem (marked in red) of the FINCH system, respectively. Since the optical path difference between the two subsystems is small enough, and the temporally varying random phases of the two sub-object points are identical, the two light fields on the camera can interfere with each other. The intensity superposition of the interference patterns formed by all original object points constitutes a hologram. When the two sub-object points interfere, their temporally varying random phases cancel each other out, and the stable phase difference between the two sub-object points (termed the phase difference for short) is preserved in the interferogram; that is, the phase difference is preserved in the hologram and can in principle be recovered from it to achieve phase-difference imaging in incoherent holography.

Fig. 1. The schematic of the FINCH system. P is a polarizer, L is a lens, BS is a beam splitter, SLM is a spatial light modulator, and CCD is a camera.

According to the above analysis, phase-difference imaging should be an imaging technology suitable for various incoherent holographic systems; that is, like the well-known intensity imaging, it should be a general imaging technology for incoherent holography. However, there is no general phase-difference imaging theory, owing both to the diversity of incoherent holography systems and to the complicated diffraction-based reconstruction process from holograms. To the best of our knowledge, among all incoherent holographic systems only FINCH has achieved phase-difference imaging, as reported in our previous works [31,32]. Because the FINCH system is a special incoherent holographic system, only constant phases need to be eliminated from the phase of the reconstructed image to achieve accurate phase-difference imaging, as reported in Ref. [31]. For other incoherent holographic systems, constant, linear, and quadratic phases generally need to be eliminated before accurate phase-difference imaging can be achieved, which is discussed in this paper.

In this paper, the general physical model of incoherent holographic systems is summarized and used to reveal the general relationship between the phase difference and the phase of the reconstructed image; the method of eliminating the additional phase generated by actual holographic systems is then introduced, and finally the general method of phase-difference imaging is proposed. This method can be used for most known incoherent holographic systems, such as those in Refs. [12,27,33–35] and others. We call this technique general Phase-Difference Imaging of Incoherent Digital Holography, or PDI-IDH for short.

With the help of PDI-IDH, incoherent holography may find applications in materials science, biomedicine, and polarized optics, because the phase difference is of great significance for studying the optical properties of anisotropic objects such as birefringent materials, biological tissues, and vectorial light converters.

2. Methodology

2.1 Analytical expression of reconstructed image

Incoherent holographic systems decompose the object point $u_0(\vec{r}_0)$ on the object plane (x0, y0) into two separate points that finally form two mutually coherent light fields $U_1(\vec{r}_c;x_0,y_0)$ and $U_2(\vec{r}_c;x_0,y_0)$ in the camera plane (xc, yc). The light intensities $|U_1(\vec{r}_c;x_0,y_0) + U_2(\vec{r}_c;x_0,y_0)|^2$ generated by all object points are superimposed to form a hologram

$$I(\vec{r}_c) = \iint \left| U_1(\vec{r}_c;x_0,y_0) + U_2(\vec{r}_c;x_0,y_0) \right|^2 dx_0\, dy_0. $$

The component $H(\vec{r}_c)$

$$H(\vec{r}_c) = \iint U_1(\vec{r}_c;x_0,y_0)\, U_2^\ast(\vec{r}_c;x_0,y_0)\, dx_0\, dy_0$$
can be separated from the hologram $I(\vec{r}_c)$ using off-axis or phase-shifting techniques. The reconstructed image $a_i(\vec{R}_i)\exp[j\phi_i(\vec{R}_i)]$ on the image plane (Xi, Yi) can be calculated from $H(\vec{r}_c)$ by diffraction calculation [12], where $a_i(\vec{R}_i)$ and $\phi_i(\vec{R}_i)$ are the amplitude and phase of the reconstructed image.
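Where phase-shifting demodulation is used, the following minimal sketch shows one common option, four-step phase shifting, for isolating the cross term $H(\vec{r}_c)$ of Eq. (2) from recorded intensities. The function name, the assumption that the π/2 shifts are applied to the $U_2$ arm, and the use of Python/NumPy are ours, not part of the paper (the experiment in Section 3 instead uses a single off-axis exposure).

```python
import numpy as np

def extract_cross_term(I0, I1, I2, I3):
    """Recover H(r_c) = integral of U1*conj(U2) from four phase-shifted holograms.

    I_m is the hologram recorded with an extra phase shift of m*pi/2 applied to
    the U2 arm, i.e. I_m = |U1 + U2*exp(j*m*pi/2)|^2 integrated over the object.
    Standard four-step phase-shifting formula; names are illustrative only.
    """
    return ((I0 - I2) + 1j * (I1 - I3)) / 4.0
```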

Although the optical layouts of various on-axis incoherent holographic systems differ, their physical process of recording holograms is the same. Most on-axis systems decompose one object point into two separate points, perform successive lens imaging, and finally form two mutually interfering spherical waves on the camera plane. This is the general physical model of on-axis incoherent holographic systems. Therefore, only the general expression for the two spherical waves $U_1(\vec{r}_c;x_0,y_0)$ and $U_2(\vec{r}_c;x_0,y_0)$ is needed; from it, the expression of $H(\vec{r}_c)$ for most on-axis systems can be calculated, and finally the general expression of the image $a_i(\vec{R}_i)\exp[j\phi_i(\vec{R}_i)]$ is obtained, which is the analytical expression of the reconstructed image.

Based on this physical model of multi-lens imaging, we now derive the general expression of a spherical wave in the camera plane.

The object–image relationship of lens imaging is shown in Fig. 2, where the axial coordinates of the object point $u_0(\vec{r}_0)$, the lens L1 with focal length f1, and the image point $u_1(\vec{r}_1)$ are $z_{o1}$, 0, and $z_{i1} = (1/f_1 + 1/z_{o1})^{-1}$, respectively. When the Fresnel approximation holds, the exact relationship between $u_1(\vec{r}_1)$ and $u_0(\vec{r}_0)$ is (refer to the appendix of [32])

$$u_1(\vec{r}_1) = \left[ A_1^{-1} e^{jk D_1} e^{j\frac{k}{2}(x_0^2 + y_0^2) b_1} \right] u_0(\vec{r}_0),$$
where k is the wavenumber, $A_1 = z_{i1}/z_{o1}$, $D_1 = z_{i1} - z_{o1}$, $b_1 = (A_1 - 1)/z_{o1}$, $x_1 = A_1 x_0$, and $y_1 = A_1 y_0$. The complex amplitude of the image point after passing through n lenses can therefore be written as
$$\begin{aligned} u_n(\vec{r}_n) &= \left[ \prod_{j=1}^{n} A_j^{-1} e^{jk D_j} e^{j\frac{k}{2}(x_{j-1}^2 + y_{j-1}^2) b_j} \right] u_0(\vec{r}_0)\\ &= A_s^{-1}(n)\, e^{jk D_s(n)}\, e^{j\frac{k}{2}(x_0^2 + y_0^2) b_s(n)}\, u_0(\vec{r}_0), \end{aligned}$$
where
$$A_s(n) = \prod_{j=0}^{n} A_j,\;\text{where}\; A_j = \begin{cases} z_{ij}/z_{oj}, & j > 0\\ 1, & j = 0 \end{cases},$$
$$D_s(n) = \sum_{j=1}^{n} D_j,\;\text{where}\; D_j = z_{ij} - z_{oj},$$
$$b_s(n) = \sum_{j=1}^{n} [A_s(j-1)]^2 b_j,\;\text{where}\; b_j = (A_j - 1) z_{oj}^{-1}.$$

Fig. 2. Schematic diagram of lens imaging.

The subscript “j” corresponds to the jth lens imaging step. The image point coordinates are $x_n = A_s(n) x_0$ and $y_n = A_s(n) y_0$, where $A_s(n)$ is the magnification of the image after the nth lens imaging. The object point $u_0(\vec{r}_0)$ with illumination area element $dx_0 dy_0$ generates the image point $u_n(\vec{r}_n)$ with area element $dx_n dy_n = A_s^2(n)\, dx_0 dy_0$. In the camera plane (xc, yc), the spherical wave generated by the image point $u_n(\vec{r}_n)$ is $A_s^2(n)\, u_n(\vec{r}_n)\, S(\vec{r}_c;\vec{r}_n,z)$, where $S(\vec{r}_c;\vec{r}_n,z)$ represents the spherical wave in the (xc, yc, 0) plane generated by a unit point source located at (xn, yn, z).
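As a concrete reading of Eqs. (5)–(7), the short sketch below accumulates $A_s(n)$, $D_s(n)$, and $b_s(n)$ over a chain of lens-imaging steps. It is an illustrative Python translation under our own interface assumptions (each stage specified by its signed object distance and focal length), not the authors' Code 1.

```python
def cascade_params(stages):
    """Accumulate A_s(n), D_s(n), b_s(n) of Eqs. (5)-(7) for successive lens imaging.

    `stages` is a list of (z_o, f) pairs, one per lens: z_o is the signed object
    distance of that step and f the focal length, with the image distance taken
    as z_i = (1/f + 1/z_o)^(-1) following the convention of Fig. 2. The per-lens
    object distances must already account for the spacing between lenses
    (illustrative sketch, not the authors' Code 1 [36]).
    """
    A_s, D_s, b_s = 1.0, 0.0, 0.0           # A_0 = 1, empty sums for D_s, b_s
    for z_o, f in stages:
        z_i = 1.0 / (1.0 / f + 1.0 / z_o)   # image distance of this lens
        A_j = z_i / z_o                     # magnification of this step, Eq. (5)
        b_j = (A_j - 1.0) / z_o             # per-lens term of Eq. (7)
        b_s += A_s ** 2 * b_j               # uses A_s(j-1), the value before this step
        A_s *= A_j
        D_s += z_i - z_o                    # per-lens term of Eq. (6)
    return A_s, D_s, b_s
```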

$$\begin{aligned} S(\vec{r}_c;\vec{r}_n,z) &= \frac{e^{jk\sqrt{(x_c - x_n)^2 + (y_c - y_n)^2 + (0 - z)^2}}}{\sqrt{(x_c - x_n)^2 + (y_c - y_n)^2 + (0 - z)^2}}\\ &\approx \frac{e^{jk(-z)}}{(-z)}\, e^{j\frac{k}{2(-z)}\left[(x_c - x_n)^2 + (y_c - y_n)^2\right]}, \end{aligned}$$
where the propagation distance is 0-z = -z.

In incoherent holographic systems, the light fields $U_1(\vec{r}_c;x_0,y_0)$ and $U_2(\vec{r}_c;x_0,y_0)$ generated in the camera plane by the image points $u_{n1}(\vec{r}_{n1})$ and $u_{n2}(\vec{r}_{n2})$, which are formed from $u_{01}(\vec{r}_0)$ and $u_{02}(\vec{r}_0)$ decomposed from the same object point $u_0(\vec{r}_0)$, can be written as

$$U_1(\vec{r}_c;x_0,y_0) = A_{s1}^2(n_1)\, u_{n1}(\vec{r}_{n1})\, S(\vec{r}_c;\vec{r}_{n1},z_1),$$
$$U_2(\vec{r}_c;x_0,y_0) = A_{s2}^2(n_2)\, u_{n2}(\vec{r}_{n2})\, S(\vec{r}_c;\vec{r}_{n2},z_2).$$

To obtain $U_1(\vec{r}_c;x_0,y_0) U_2^\ast(\vec{r}_c;x_0,y_0)$, we calculate $u_{n1}(\vec{r}_{n1}) u_{n2}^\ast(\vec{r}_{n2})$ and $S(\vec{r}_c;\vec{r}_{n1},z_1) S^\ast(\vec{r}_c;\vec{r}_{n2},z_2)$ separately,

$$\begin{aligned} u_{n1}(\vec{r}_{n1})\, u_{n2}^\ast(\vec{r}_{n2}) &= \left[ A_{s1}^{-1} e^{jk D_{s1}} e^{j\frac{k}{2}(x_0^2 + y_0^2) b_{s1}} u_{01}(\vec{r}_0) \right] \left[ A_{s2}^{-1} e^{jk D_{s2}} e^{j\frac{k}{2}(x_0^2 + y_0^2) b_{s2}} u_{02}(\vec{r}_0) \right]^\ast\\ &= (A_{s1} A_{s2})^{-1} e^{jk(D_{s1} - D_{s2})} e^{j\frac{k}{2}(x_0^2 + y_0^2)(b_{s1} - b_{s2})} a_{01}(\vec{r}_0)\, a_{02}(\vec{r}_0)\, e^{j\Delta\varphi(\vec{r}_0)}, \end{aligned}$$
where $a_{01}(\vec{r}_0) a_{02}(\vec{r}_0) e^{j\Delta\varphi(\vec{r}_0)} = u_{01}(\vec{r}_0) u_{02}^\ast(\vec{r}_0)$; $a_{01}(\vec{r}_0)$ and $a_{02}(\vec{r}_0)$ are the amplitudes of $u_{01}(\vec{r}_0)$ and $u_{02}(\vec{r}_0)$, and $\Delta\varphi(\vec{r}_0)$ is the phase difference between them. For brevity, “(n1)” and “(n2)” are omitted in Eq. (11).
$$\begin{aligned} &S(\vec{r}_c;\vec{r}_{n1},z_1)\, S^\ast(\vec{r}_c;\vec{r}_{n2},z_2)\\ &= \frac{e^{jk(-z_1)}}{(-z_1)} e^{j\frac{k}{2(-z_1)}\left[(x_c - A_{s1} x_0)^2 + (y_c - A_{s1} y_0)^2\right]} \frac{e^{jk z_2}}{(-z_2)} e^{j\frac{k}{2 z_2}\left[(x_c - A_{s2} x_0)^2 + (y_c - A_{s2} y_0)^2\right]}\\ &= \frac{e^{jk(z_2 - z_1)}}{z_1 z_2} e^{j\frac{k}{2}(x_0^2 + y_0^2)\frac{(A_{s2} - A_{s1})^2}{z_2 - z_1}} e^{j\frac{k}{2(-Z_i)}\left[(x_c - X_i)^2 + (y_c - Y_i)^2\right]}, \end{aligned}$$
where $X_i = A_i x_0$, $Y_i = A_i y_0$,
$$Z_i = \frac{z_2 z_1}{z_2 - z_1},$$
$$A_i = \frac{z_2 A_{s1} - z_1 A_{s2}}{z_2 - z_1}.$$

Equation (12) can be written in the form of a spherical wave,

$$S(\vec{r}_c;\vec{r}_{n1},z_1)\, S^\ast(\vec{r}_c;\vec{r}_{n2},z_2) = \left[ \frac{e^{jk(z_2 - z_1 + Z_i)}}{-(z_2 - z_1)} e^{j\frac{k}{2}(x_0^2 + y_0^2)\frac{(A_{s2} - A_{s1})^2}{z_2 - z_1}} \right] S(\vec{r}_c;\vec{R}_i,Z_i).$$

Substituting Eqs. (11) and (15) into $U_1(\vec{r}_c;x_0,y_0) U_2^\ast(\vec{r}_c;x_0,y_0)$ gives

$$U_1(\vec{r}_c;x_0,y_0)\, U_2^\ast(\vec{r}_c;x_0,y_0) = (j\lambda)^{-1} A_i^2\, a_i(\vec{R}_i)\, e^{j\phi_i(\vec{R}_i)}\, S(\vec{r}_c;\vec{R}_i,Z_i),$$
where
$$a_i(\vec{R}_i) = A_i^{-2} \lambda \left| A_{s1} A_{s2} (z_2 - z_1)^{-1} \right| a_{01}(\vec{r}_0)\, a_{02}(\vec{r}_0),$$
$$\begin{aligned} e^{j\phi_i(\vec{R}_i)} &= \textrm{sign}[A_{s1} A_{s2}(z_2 - z_1)]\,(-j)\, e^{jk\left[(D_{s1} - D_{s2} + z_2 - z_1) + Z_i\right]} e^{j\frac{k}{2}(x_0^2 + y_0^2) B} e^{j\Delta\varphi(\vec{r}_0)}\\ &= \textrm{sign}[A_{s1} A_{s2}(z_2 - z_1)]\, e^{j\left(k Z_i - \frac{\pi}{2}\right)} e^{j\frac{k}{2}(X_i^2 + Y_i^2)\frac{B}{A_i^2}} e^{j\Delta\varphi(\vec{r}_0)}, \end{aligned}$$
$$B = b_{s1} - b_{s2} + (A_{s2} - A_{s1})^2 (z_2 - z_1)^{-1}.$$

Because the amplitude is non-negative, an absolute value is applied to $A_{s1}A_{s2}(z_2 - z_1)^{-1}$ in Eq. (17). The “sign()” in Eq. (18) is a sign function: $\textrm{sign}[A_{s1}A_{s2}(z_2 - z_1)]$ indicates that π must be added to the phase when $A_{s1}A_{s2}(z_2 - z_1)$, and hence $A_{s1}A_{s2}(z_2 - z_1)^{-1}$, is negative. Equation (6) indicates that $D_{s1}$ is the distance from the object point $u_{01}(\vec{r}_0)$ to the image point $u_{n1}(\vec{r}_{n1})$, and $-z_1$ is the diffraction distance of the spherical wave defined by Eq. (8), that is, the distance from the image point $u_{n1}(\vec{r}_{n1})$ to the camera plane. Therefore, $D_{s1} - z_1$ is the distance from the object point $u_{01}(\vec{r}_0)$ to the camera plane; similarly, $D_{s2} - z_2$ is the distance from the object point $u_{02}(\vec{r}_0)$ to the camera plane. In incoherent holographic systems, the distance between $u_{01}(\vec{r}_0)$ and the camera is required to equal that between $u_{02}(\vec{r}_0)$ and the camera, so that $U_1(\vec{r}_c;x_0,y_0)$ and $U_2(\vec{r}_c;x_0,y_0)$ can interfere with each other. Therefore, $D_{s1} - D_{s2} + z_2 - z_1$ equals 0 in Eq. (18). Taking the factor $-j$ into account, the constant phase is $\exp[j(kZ_i - \pi/2)]$.

Substituting Eq. (16) into Eq. (2), the diffraction relationship is obtained as

$$H(\vec{r}_c) = \frac{e^{jk(-Z_i)}}{j\lambda(-Z_i)} \iint \left[ a_i(\vec{R}_i)\, e^{j\phi_i(\vec{R}_i)} \right] e^{j\frac{k}{2(-Z_i)}\left[(x_c - X_i)^2 + (y_c - Y_i)^2\right]} dX_i\, dY_i.$$

Since $a_{01}(\vec{r}_0)$ and $a_{02}(\vec{r}_0)$ in the amplitude $a_i(\vec{R}_i)$ (see Eq. (17)) are the two amplitude components of the object point $u_0(\vec{r}_0)$ (see Eq. (11)), $a_i(\vec{R}_i)$ is proportional to $|u_0(\vec{r}_0)|^2$. So, $a_i(\vec{R}_i)\exp[j\phi_i(\vec{R}_i)]$ is the reconstructed image we need. By performing a diffraction calculation on $H(\vec{r}_c)$,

$$a_i(\vec{R}_i)\, e^{j\phi_i(\vec{R}_i)} = \frac{e^{jk Z_i}}{j\lambda Z_i} \iint H(\vec{r}_c)\, e^{j\frac{k}{2 Z_i}\left[(X_i - x_c)^2 + (Y_i - y_c)^2\right]} dx_c\, dy_c.$$

Although Eq. (21) is the well-known traditional reconstruction method, the analytical expression of the image, namely Eqs. (17) and (18), is accurately derived here for the first time. The general relationship between the phase $\phi_i(\vec{R}_i)$ of the image and the phase difference $\Delta\varphi(\vec{r}_0)$ is given by Eq. (18).
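Numerically, the Fresnel back-propagation of Eq. (21) is usually evaluated with FFTs. The sketch below uses the transfer-function (convolution) form of the Fresnel integral, which keeps the output sampling equal to the camera pitch; it is a generic implementation written for illustration, not the authors' reconstruction code, and variable names such as `pixel_pitch` are assumptions.

```python
import numpy as np

def fresnel_reconstruct(H_holo, wavelength, pixel_pitch, Zi):
    """Evaluate Eq. (21): propagate the complex cross term H(r_c) over the
    reconstruction distance Z_i using the Fresnel transfer-function method,
    and return the amplitude and phase of the reconstructed image."""
    k = 2.0 * np.pi / wavelength
    ny, nx = H_holo.shape
    fx = np.fft.fftfreq(nx, d=pixel_pitch)      # spatial frequencies along x [1/m]
    fy = np.fft.fftfreq(ny, d=pixel_pitch)      # spatial frequencies along y [1/m]
    FX, FY = np.meshgrid(fx, fy)
    # Fourier transform of the Fresnel kernel e^{jkZ}/(j*lambda*Z) * exp(jk r^2 / 2Z)
    transfer = np.exp(1j * k * Zi) * np.exp(-1j * np.pi * wavelength * Zi * (FX ** 2 + FY ** 2))
    image = np.fft.ifft2(np.fft.fft2(H_holo) * transfer)
    return np.abs(image), np.angle(image)
```

Refocusing onto a different object plane, as discussed next, amounts to calling the same routine with the corresponding reconstruction distance.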

In the above discussion, we only analyzed the hologram formed by a single object plane. When multiple object planes are considered, the hologram $I_s(\vec{r}_c)$ is the intensity superposition of the holograms formed by all object planes. Using Eq. (21), the hologram $I_s(\vec{r}_c)$ can be focused on different object planes with different reconstruction distances; that is, three-dimensional intensity imaging can be achieved from the amplitude $a_i(\vec{R}_i)$ of the reconstructed image. The constant and quadratic phase terms related to the object distance in Eq. (18) can then be calculated and eliminated from the phase $\phi_i(\vec{R}_i)$ of the reconstructed image. Finally, the phase difference $\Delta\varphi(\vec{r}_0)$ of objects located on different planes can be obtained; that is, three-dimensional phase-difference imaging can be achieved.

Considering that the calculation of parameters such as $Z_i$, B, and $A_i$ in Eqs. (18) and (21) is cumbersome, we compiled the above theory into a general algorithm, provided as Code 1 (Ref. [36]). By inputting the parameters of the holographic system into the algorithm, the user can directly obtain $Z_i$, B, $A_i$, and the sign of $A_{s1}A_{s2}(z_2 - z_1)$, as demonstrated in the experimental section.
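For orientation only, the following lines show how the arm-level quantities feed the reconstruction parameters of Eqs. (13), (14), (18), and (19); the authors' Code 1 [36] is the reference implementation, and this sketch assumes the cascaded-lens accumulation given earlier.

```python
import numpy as np

def reconstruction_params(A_s1, b_s1, z1, A_s2, b_s2, z2):
    """Combine the two interferometer arms into Z_i, A_i, B and the sign factor.

    A_sj and b_sj come from the cascaded-lens accumulation for arm j (see the
    sketch after Eqs. (5)-(7)); z_j is the signed coordinate of arm j's final
    image point, so -z_j is its diffraction distance to the camera (Eq. (8)).
    """
    Zi = z1 * z2 / (z2 - z1)                              # Eq. (13)
    Ai = (z2 * A_s1 - z1 * A_s2) / (z2 - z1)              # Eq. (14)
    B = b_s1 - b_s2 + (A_s2 - A_s1) ** 2 / (z2 - z1)      # Eq. (19)
    sgn = np.sign(A_s1 * A_s2 * (z2 - z1))                # sign factor of Eq. (18)
    return Zi, Ai, B, sgn
```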

2.2 Additional phase of actual systems

The above analysis is based on an ideal on-axis system, but the actual system inevitably deviates from the ideal one, which changes the phases of the spherical waves $U_1(\vec{r}_c;x_0,y_0)$ and $U_2(\vec{r}_c;x_0,y_0)$. Therefore, the $H(\vec{r}_c)$ of the ideal system becomes $H'(\vec{r}_c)$ in an actual system, and an additional phase $\phi_c(\vec{r}_c)$ between $H(\vec{r}_c)$ and $H'(\vec{r}_c)$ is inevitable. After the diffraction calculation of Eq. (21), $\phi_c(\vec{r}_c)$ is transferred to the image, which distorts the relationship between the phase $\phi_i(\vec{R}_i)$ of the image and the phase difference $\Delta\varphi(\vec{r}_0)$ in Eq. (18). Therefore, $\phi_c(\vec{r}_c)$ must be eliminated before reconstruction to make Eq. (18) applicable to actual on-axis systems.

Off-axis systems can be regarded as on-axis systems with additional phases, where the linear part of the additional phase produces the off-axis effect. Therefore, after the additional phase $\phi_c(\vec{r}_c)$ is eliminated, Eq. (18) is also applicable to actual off-axis systems.

We express the additional phase in the form of a series expansion, that is

$$\phi_c(\vec{r}_c) = \exp\!\left[ jk \sum_{p=0}^{\infty} \alpha_p x_c^p \right] \exp\!\left[ jk \sum_{q=0}^{\infty} \beta_q y_c^q \right],$$
where $k\sum \alpha_p x_c^p$ and $k\sum \beta_q y_c^q$ represent the phases in the x and y directions respectively, the integers p and q represent the order, $\alpha_p$ and $\beta_q$ are coefficients, and $x_c$ and $y_c$ are the coordinates of the camera plane.

The coherence length of incoherent light is generally only tens of microns, so the deviations of $U_1(\vec{r}_c;x_0,y_0)$ and $U_2(\vec{r}_c;x_0,y_0)$ cannot be too large in a carefully constructed holographic system. Therefore, the values of $\alpha_p$ and $\beta_q$ should be small. In addition, the photosensitive area of the camera is generally smaller than 2 × 2 cm², so the values of $x_c$ and $y_c$ are also small. Therefore, the higher-order terms (p, q ≥ 2) of $x_c$ and $y_c$ in Eq. (22) are higher-order small quantities and can be ignored. Keeping only the terms with p = 0, 1 and q = 0, 1 in Eq. (22), $\phi_c(\vec{r}_c)$ can be simplified to

$$\phi_c(\vec{r}_c) = e^{j\varphi_c}\, e^{jk(\alpha x_c + \beta y_c)},$$
where $\varphi_c$ is a constant phase and $k(\alpha x_c + \beta y_c)$ is a linear phase.

We compensate for the additional phase before image reconstruction and modify the reconstruction formula, Eq. (21), to

$$a_i(\vec{R}_i)\, e^{j\phi_i(\vec{R}_i)} = \frac{e^{jk Z_i}}{j\lambda Z_i} \iint \left[ H'(\vec{r}_c)\, e^{-j\varphi_c}\, e^{-jk(\alpha x_c + \beta y_c)} \right] e^{j\frac{k}{2 Z_i}\left[(X_i - x_c)^2 + (Y_i - y_c)^2\right]} dx_c\, dy_c.$$

Since $\varphi_c$ and $k(\alpha x_c + \beta y_c)$ are produced by the actual system, they must be calibrated experimentally; the calibration method is described in the experimental section.

Generally, for a stable incoherent holographic system, the constant phase φc and the linear phase k(αxc +βyc) are stable and only need to be calibrated once.
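The correction factor of Eq. (24) is applied point-wise to the measured cross term before the Fresnel reconstruction. A minimal sketch, assuming the coordinate origin at the center of the detector array and the calibrated constants of Section 3.2, is:

```python
import numpy as np

def compensate_additional_phase(H_prime, wavelength, pixel_pitch, phi_c, alpha, beta):
    """Multiply H'(r_c) by exp(-j*phi_c)*exp(-j*k*(alpha*x_c + beta*y_c)) as in Eq. (24).

    phi_c, alpha and beta are the calibrated constants; x_c, y_c are camera-plane
    coordinates with the origin placed at the array centre (an assumption here).
    """
    k = 2.0 * np.pi / wavelength
    ny, nx = H_prime.shape
    xc = (np.arange(nx) - nx // 2) * pixel_pitch
    yc = (np.arange(ny) - ny // 2) * pixel_pitch
    XC, YC = np.meshgrid(xc, yc)
    return H_prime * np.exp(-1j * phi_c) * np.exp(-1j * k * (alpha * XC + beta * YC))
```

The compensated hologram can then be passed to the Fresnel reconstruction sketch of Section 2.1 to evaluate Eq. (24).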

3. Experiments

3.1 Incoherent digital holographic system

To verify that the phase-difference imaging method is generally suitable for various incoherent holographic systems, the experimental system should not only contain multiple lenses but also generate a significant additional phase. Therefore, the triangular interference system shown in Fig. 3 was selected, which contains multiple lenses and forms an off-axis hologram with a large additional phase.

Fig. 3. (a) The schematic diagram of the triangle interferometer system. (b) The simplified schematic diagram for analysis. The green solid line represents the horizontally polarized component, and the red dashed line represents the vertically polarized component. L1, L2 and L3 are lenses, P1 and P2 are polarizers, PBS is a polarizing beam splitter, and M1 and M2 are mirrors.

Figure 3(a) is a schematic diagram of the triangular interferometric system, and Fig. 3(b) is a simplified schematic diagram for analysis. An LED (GCI-060403, Daheng Optics) with a small light-emitting area and a stationary ground glass together form an incoherent light source with a large light-emitting area, which improves the imaging quality. The illumination light illuminates the object after passing through the polarizer P1, whose polarization direction is 45° from the horizontal. The object beam is converged by lens L1 and then propagates through the triangular interferometer, which is composed of a polarizing beam splitter (PBS), lenses L2 and L3, and mirrors M1 and M2. The transmitted and reflected beams are shown as the green solid and red dashed lines, respectively. The two beams interfere with each other after passing through polarizer P2, and the hologram is captured by a CCD (Grasshopper3 GS3-U3-51S5M, Point Grey).

The distances and focal lengths of the lenses in Fig. 3(b) are listed in Table 1, where Δd is the distance correction due to the refractive index difference between the PBS and air, which ensures that the imaging position can be calculated correctly (refer to Supplement 1 of Ref. [31]). Δd is equal to [(1/np) − 1]h, where np = 1.7174 and h = 25.4 mm are the refractive index and thickness of the PBS, respectively.
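For reference, substituting the quoted values into this expression gives

$$\Delta d = \left( \frac{1}{n_p} - 1 \right) h = \left( \frac{1}{1.7174} - 1 \right) \times 25.4\ \text{mm} \approx -10.6\ \text{mm}.$$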

Table 1. The distance parameters in Fig. 3(b)

3.2 Calibration of φc and k(αxc+βyc)

To determine the additional phase generated by the actual holographic system, a light-transmitting circular hole is used as the object. Since the illumination is linearly polarized light inclined at 45°, the phase difference between the horizontal and vertical polarization components of the object light is zero, and its distribution is uniform.

Setting α = β = φc = 0, we use Eqs. (24) and (18) to reconstruct the object and calculate the phase difference, where the parameters required in Eqs. (24) and (18) are obtained with our general algorithm [36]. The parameters input to the algorithm are the object distance do = 37 mm and all distances and focal lengths of the holographic system (see Table 1). The output of the algorithm is Zi = 0.289 m, B = 1.873 m⁻¹, Ai = −0.780, and sign[As1As2(z2 − z1)] = +1. Figures 4(a) and 4(b) show the amplitude and phase of the reconstructed image, respectively, where the zero-order spectrum is suppressed. The messy phase distribution in Fig. 4(b) indicates that the phase of the reconstructed image itself is not suitable for imaging objects. Figure 4(c) shows the distribution of the phase difference. Affected by the additional phase (constant and linear phases) generated by the actual system, the phase difference changes linearly.

Fig. 4. (a) and (b) are the amplitude and phase of the reconstructed image. (c) is the phase difference of the object, where the additional phases (constant phase and linear phase generated by the actual system) are not calibrated.

Keeping β = φc = 0, we set α to −0.001, −0.002, and −0.003 in sequence; the resulting phase differences are shown in Figs. 5(a)–5(c). From Figs. 5(a)–5(c), the relationship between α and the direction of change of the phase difference is monotonic. Therefore, we can easily find α = −0.0043, which makes the phase difference change only in the vertical direction, as shown in Fig. 5(d). In the same way, we obtain β = 0.0059, for which the phase difference changes only in the horizontal direction, as shown in Fig. 5(e). Substituting α = −0.0043, β = 0.0059, and φc = 0 into Eqs. (24) and (18), the resulting phase difference is shown in Fig. 5(f).

Fig. 5. (a), (b) and (c) are the phase differences of the object obtained when α is −0.001, −0.002 and −0.003 respectively, where β = φc = 0. (d) is the phase difference of the object obtained when α = −0.0043 and β = φc = 0. (e) is the phase difference of the object obtained when β = 0.0059 and α = φc = 0. (f) is the phase difference of the object obtained when α = −0.0043, β = 0.0059 and φc = 0.

The average phase of the region (20 × 20 pixels) marked by the white rectangle in Fig. 5(f) is −1.55 rad. Since the phase difference $\Delta\varphi(\vec{r}_0)$ between the horizontal and vertical polarization components of the linearly polarized light should be 0, −1.55 rad is the constant phase φc of the actual system.

The above process constitutes the calibration of φc and k(αxc + βyc) for our holographic system. All subsequent phase-difference imaging experiments use φc = −1.55 rad, α = −0.0043, and β = 0.0059.
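The manual scan described above can also be organized as a small search: reconstruct the uniform circular-hole object for each candidate α (then β), keep the value that flattens the phase-difference map along x (then y), and take φc as the mean phase over a reference region. The sketch below assumes a hypothetical helper `reconstruct(H, alpha, beta, phi_c)` that returns the phase-difference map via Eqs. (24) and (18); it mirrors the procedure of this section rather than reproducing the authors' exact workflow.

```python
import numpy as np

def flatness(phase_diff, axis, mask):
    """Mean absolute wrapped phase gradient along one axis inside the aperture mask:
    small when the phase-difference map no longer tilts in that direction."""
    grad = np.angle(np.exp(1j * np.diff(phase_diff, axis=axis)))
    m = (mask[:, :-1] & mask[:, 1:]) if axis == 1 else (mask[:-1, :] & mask[1:, :])
    return np.mean(np.abs(grad[m]))

def calibrate(H_prime, reconstruct, alphas, betas, mask, ref_region):
    """Scan alpha, then beta, keeping the flattest result; phi_c is the residual
    mean phase over a reference region where the true phase difference is zero."""
    alpha = min(alphas, key=lambda a: flatness(reconstruct(H_prime, a, 0.0, 0.0), 1, mask))
    beta = min(betas, key=lambda b: flatness(reconstruct(H_prime, alpha, b, 0.0), 0, mask))
    phi_c = float(np.mean(reconstruct(H_prime, alpha, beta, 0.0)[ref_region]))
    return alpha, beta, phi_c
```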

3.3 Phase-difference imaging

To construct object light with a non-uniform phase difference, a radial polarization converter (RPC-515-08-515, Workshop of Photonics) and a quarter-wave plate are placed behind the light-transmitting circular hole. The linearly polarized illumination light passes through the radial polarization converter to form radially polarized light. After the radially polarized light passes through a quarter-wave plate whose optical axis is 45° from the horizontal, elliptically polarized light is formed. The intensity distribution of the elliptically polarized light is shown in Fig. 6(a), and the phase difference between the horizontal and vertical polarization components is shown in Fig. 6(b). Figures 6(c) and 6(d) are the amplitude and phase difference of the object reconstructed by Eqs. (24) and (18). Figure 6(d) is essentially the same as Fig. 6(b), which verifies that the proposed PDI-IDH can achieve phase-difference imaging of anisotropic objects.
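As a rough cross-check of the theoretical distribution in Fig. 6(b), the phase difference expected from a radial polarization converter followed by a quarter-wave plate at 45° can be worked out with Jones calculus. The sketch below is our own illustration under one common sign convention for the quarter-wave plate, so the absolute offset and sign of the computed Δφ may differ from the definitions used in the experiment.

```python
import numpy as np

def expected_phase_difference(theta):
    """Phase difference between the x and y field components after radially
    polarized light [cos(theta), sin(theta)] passes a quarter-wave plate whose
    fast axis is at 45 deg (retardance pi/2, one assumed convention)."""
    c = s = np.cos(np.pi / 4)
    qwp = np.array([[c ** 2 + 1j * s ** 2, (1 - 1j) * s * c],
                    [(1 - 1j) * s * c, s ** 2 + 1j * c ** 2]])   # up to a global phase
    E_out = qwp @ np.stack([np.cos(theta), np.sin(theta)])       # Jones vector per azimuth
    return np.angle(E_out[0] * np.conj(E_out[1]))                # delta_phi = arg(Ex * Ey^*)

# The phase difference varies linearly with the azimuth angle, wrapping twice per turn.
azimuth = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
print(np.round(expected_phase_difference(azimuth), 3))
```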

Fig. 6. (a) and (b) are the theoretical distributions of the intensity and phase difference of the object. (c) and (d) are the amplitude and phase difference of the object obtained experimentally.

To demonstrate the three-dimensional imaging capability, a light-transmitting rectangular hole is added to the experiment in addition to the circular hole with the non-uniform phase difference. The linearly polarized light passes through the rectangular hole to form object light with a uniform phase difference. When focusing on the circular hole, the amplitude and phase-difference distributions of the objects are shown in Figs. 7(a) and 7(b), respectively. When focusing on the rectangular hole, they are shown in Figs. 7(c) and 7(d), respectively. The results in Fig. 7 demonstrate that phase-difference imaging has the same three-dimensional imaging capability as intensity imaging.

Fig. 7. (a) and (b) are the intensity and phase difference distributions of the objects when focusing on the circular object. (c) and (d) are the intensity and phase difference distributions of the objects when focusing on the rectangular object.

The above experimental results verify that the proposed PDI-IDH can achieve three-dimensional phase-difference imaging and is suitable for both anisotropic objects (the circular hole) and isotropic objects (the rectangular hole). In addition, phase-difference imaging can be used to measure the polarization distribution of polarized light; for the method, refer to our previous work [32].

In actual experiments, two factors may cause a slight distortion of the phase difference. First, the effective numerical aperture of the recording system is not large enough: it is assumed to be infinite in theory but is limited in experiments. Second, the constant phase φc in Eq. (24) generated by the actual system may change with the experimental environment, because factors such as temperature and air flow alter the refractive index distribution of the air, thereby changing the optical path length and causing phase changes.

4. Conclusion

We summarize a physical model based on the commonality of self-interference incoherent holographic systems and derive a general relationship between the phase of the reconstructed image and the phase difference of the object. Considering that the additional phase generated by actual holographic systems distorts this relationship, the additional phase is analyzed and an elimination method is introduced to make the relationship in actual systems consistent with that in ideal systems. Finally, phase-difference imaging suitable for most incoherent holographic systems is proposed, and a general algorithm for quickly calculating the required parameters is provided [36]. To verify the theory, an off-axis triangular interferometer system was selected and calibrated only once, and phase-difference imaging was then achieved with a single exposure. The anisotropic properties of objects are related to the phase difference between two orthogonal polarization components, which can be obtained by phase-difference imaging. With the help of phase-difference imaging, incoherent holography should have potential applications in materials science, biomedicine, polarized optics, and other fields.

Funding

National Natural Science Foundation of China (12274224, 62105146); Fundamental Research Funds for the Central Universities (NQ2023013, NS2022079); Natural Science Foundation of Jiangsu Province (BK20210290).

Disclosures

The authors declare no conflicts of interest.

Data availability

The general algorithm is available in Code 1 [36]. Other data are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. D. Gabor, “A New Microscopic Principle,” Nature 161(4098), 777–778 (1948). [CrossRef]  

2. E. N. Leith and J. Upatnieks, “Reconstructed Wavefronts and Communication Theory,” J. Opt. Soc. Am. 52(10), 1123–1130 (1962). [CrossRef]  

3. E. Wolf, “Three-dimensional structure determination of semi-transparent objects from holographic data,” Opt. Commun. 1(4), 153–156 (1969). [CrossRef]  

4. B. Javidi, A. Carnicer, A. Anand, et al., “Roadmap on digital holography [Invited],” Opt. Express 29(22), 35078–35118 (2021). [CrossRef]

5. M. Tang, H. He, and L. K. Yu, “Real-time 3D imaging of ocean algae with crosstalk suppressed single-shot digital holographic microscopy,” Biomed. Opt. Express 13(8), 4455–4467 (2022). [CrossRef]  

6. M. Valentino, J. Behal, V. Bianco, et al., “Intelligent polarization-sensitive holographic flow-cytometer: Towards specificity in classifying natural and microplastic fibers,” Sci. Total Environ. 815, 152708 (2022). [CrossRef]  

7. T. R. Liu, Y. Z. Li, H. C. Koydemir, et al., “Rapid and stain-free quantification of viral plaque via lens-free holography and deep learning,” Nat. Biomed. Eng. 7(8), 1040–1052 (2023). [CrossRef]  

8. T. Li, Q. H. Song, G. J. He, et al., “A Method for Detecting the Vacuum Degree of Vacuum Glass Based on Digital Holography,” Sensors 23(5), 2468 (2023). [CrossRef]  

9. A. W. Lohmann, “Wavefront Reconstruction for Incoherent Objects,” J. Opt. Soc. Am. 55(11), 1555–1556 (1965). [CrossRef]  

10. G. W. Stroke and R. C. Restrick, “Holography with Spatially Noncoherent Light,” Appl. Phys. Lett. 7(9), 229–231 (1965). [CrossRef]  

11. G. Cochran, “New Method of Making Fresnel Transforms with Incoherent Light,” J. Opt. Soc. Am. 56(11), 1513–1517 (1966). [CrossRef]  

12. J. Rosen and G. Brooker, “Digital spatially incoherent Fresnel holography,” Opt. Lett. 32(8), 912–914 (2007). [CrossRef]  

13. J. Rosen and G. Brooker, “Non-scanning motionless fluorescence three-dimensional holographic microscopy,” Nat. Photonics 2(3), 190–195 (2008). [CrossRef]  

14. M. K. Kim, “Full color natural light holographic camera,” Opt. Express 21(8), 9636–9642 (2013). [CrossRef]  

15. K. Choi, J. Yim, and S.-W. Min, “Achromatic phase shifting self-interference incoherent digital holography using linear polarizer and geometric phase lens,” Opt. Express 26(13), 16212–16225 (2018). [CrossRef]  

16. T. Hara, T. Tahara, Y. Ichihashi, et al., “Multiwavelength-multiplexed phase-shifting incoherent color digital holography,” Opt. Express 28(7), 10078–10089 (2020). [CrossRef]  

17. J. Rosen, N. Siegel, and G. Brooker, “Theoretical and experimental demonstration of resolution beyond the Rayleigh limit by FINCH fluorescence microscopic imaging,” Opt. Express 19(27), 26249–26268 (2011). [CrossRef]  

18. F. Ma, Y. Li, X. Wang, et al., “Investigation of the effective aperture: towards high-resolution Fresnel incoherent correlation holography,” Opt. Express 29(20), 31549–31560 (2021). [CrossRef]  

19. N. Siegel and G. Brooker, “Single shot holographic super-resolution microscopy,” Opt. Express 29(11), 15953–15968 (2021). [CrossRef]  

20. T. Tahara, T. Ito, Y. Ichihashi, et al., “Multiwavelength three-dimensional microscopy with spatially incoherent light, based on computational coherent superposition,” Opt. Lett. 45(9), 2482–2485 (2020). [CrossRef]  

21. T. Tahara, Y. Kozawa, A. Ishii, et al., “Two-step phase-shifting interferometry for self-interference digital holography,” Opt. Lett. 46(3), 669–672 (2021). [CrossRef]  

22. P. Jeon, J. Kim, H. Lee, et al., “Comparative study on resolution enhancements in fluorescence-structured illumination Fresnel incoherent correlation holography,” Opt. Express 29(6), 9231–9241 (2021). [CrossRef]  

23. M. Potcoava, C. Mann, J. Art, et al., “Spatio-temporal performance in an incoherent holography lattice light-sheet microscope (IHLLS),” Opt. Express 29(15), 23888–23901 (2021). [CrossRef]  

24. T. Nobukawa, M. Maezawa, Y. Katano, et al., “Transformation of coherence-dependent bokeh for incoherent digital holography,” Opt. Lett. 47(11), 2774–2777 (2022). [CrossRef]  

25. P. Wu, D. J. Zhang, J. Yuan, et al., “Large depth-of-field fluorescence microscopy based on deep learning supported by Fresnel incoherent correlation holography,” Opt. Express 30(4), 5177–5191 (2022). [CrossRef]  

26. H. Yu, Y. Kim, D. Yang, et al., “Deep learning-based incoherent holographic camera enabling acquisition of real-world holograms for holographic streaming system,” Nat. Commun. 14(1), 3534 (2023). [CrossRef]  

27. Y. Wan, T. Man, and D. Wang, “Incoherent off-axis Fourier triangular color holography,” Opt. Express 22(7), 8565–8573 (2014). [CrossRef]  

28. R. Kelner, B. Katz, and J. Rosen, “Optical sectioning using a digital Fresnel incoherent-holography-based confocal imaging system,” Optica 1(2), 70–74 (2014). [CrossRef]  

29. N. Siegel, V. Lupashin, B. Storrie, et al., “High-magnification super-resolution FINCH microscopy using birefringent crystal lens interferometers,” Nat. Photonics 10(12), 802–808 (2016). [CrossRef]  

30. K. Choi, J. Yim, S. Yoo, et al., “Self-interference digital holography with a geometric-phase hologram lens,” Opt. Lett. 42(19), 3940–3943 (2017). [CrossRef]  

31. W. Sheng, Y. Liu, Y. Shi, et al., “Phase-difference imaging based on FINCH,” Opt. Lett. 46(11), 2766–2769 (2021). [CrossRef]  

32. W. Sheng, Y. Liu, H. Yang, et al., “Polarization-sensitive imaging based on incoherent holography,” Opt. Express 29(18), 28054–28065 (2021). [CrossRef]  

33. R. Kelner, J. Rosen, and G. Brooker, “Enhanced resolution in Fourier incoherent single channel holography (FISCH) with reduced optical path difference,” Opt. Express 21(17), 20131–20144 (2013). [CrossRef]  

34. D. Muhammad, C. M. Nguyen, J. Lee, et al., “Spatially incoherent off-axis Fourier holography without using spatial light modulator (SLM),” Opt. Express 24(19), 22097–22103 (2016). [CrossRef]  

35. T. Nobukawa, Y. Katano, T. Muroi, et al., “Bimodal Incoherent Digital Holography for Both Three-Dimensional Imaging and Quasi-Infinite-Depth-of-Field Imaging,” Sci. Rep. 9(1), 3363 (2019). [CrossRef]  

36. W. Sheng, “General Algorithm for PDI_IDH,” figshare, (2024), https://doi.org/10.6084/m9.figshare.24866706.

Supplementary Material (1)

Code 1: General Algorithm for IDH_CPI

