
Space-variant point spread function measurement and interpolation at any depth based on single-pixel imaging


Abstract

Point spread function (PSF) is important for evaluating an optical system and for image deblurring. In this paper, we propose a method to measure the space-variant PSF at any depth based on single-pixel imaging (SPI), and we introduce a depth-variant PSF interpolation model. In our method, the space-variant PSF is regarded as the set of light transport coefficients from object points to image pixels. By applying SPI to each image pixel to obtain these light transport coefficients at different depths, the PSF of each object point can be extracted. The depth of each PSF is calculated using multi-frequency heterodyne phase-shifting principles and the perspective-n-point (PnP) algorithm. In our PSF interpolation model, we first interpolate the light transport coefficients from different object points to an image pixel, and then obtain the interpolated PSF indirectly from the interpolated coefficients. With a simple experimental setup consisting of a digital camera and a liquid crystal display (LCD) screen that capture and display specific patterns, and whose relative distance is changed, the proposed method accurately obtains the space-variant PSF at any depth. Without complicated calculation, the PSF at a certain depth can be interpolated from the PSF data measured at another depth with our interpolation method. Significant similarities exist between the interpolated PSF and the directly measured PSF. Our work is a successful attempt at using SPI to solve traditional optical problems.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Point spread function (PSF) is the impulse response of an optical system and can represent the quality of the optical system. PSF is a widely used concept in Fourier optics [1,2], astronomical imaging [3–5], medical imaging [6,7], electron microscopy [8], lithography [9], and so on. It can be regarded as linear and position-invariant in simple situations [10]. However, realistic spatially variant situations cannot be set aside. An accurate PSF measurement and interpolation result has practical and scientific importance.

Traditional PSF acquiring methods can be divided into three categories [11], namely, blind PSF estimation, non-blind PSF estimation, and PSF direct measurement methods.

Blind PSF estimation using natural scene images means estimating the PSF from acquired images without any prior information. This approach generally relies on sharp edges in images [12] or on two photographs taken at different distances [13]. With these methods, the accuracy of the PSF is affected by the features that the images contain, which are usually unstable.

The second category of methods obtains the PSF by taking images of certain patterns, such as checkerboards [12] and disks [14]. These patterns always contain sharp edges. Because this category uses prior information, such as the shape of the patterns, it is more stable and accurate. The first two categories of methods usually regard the PSF as a Gaussian model and estimate the parameters of this Gaussian model, which is an idealized model.

The last category of methods obtains the PSF by taking images of point-like sources [15,16]. These methods are simple and reliable, directly measuring the ground-truth PSF. However, there are some requirements. On the one hand, the point-like source must be very small. On the other hand, the intensity of the source must be strong enough. If these two requirements cannot be well satisfied, the SNR (signal-to-noise ratio) of the measured PSF will be low.

Depth-variant PSF (DV-PSF) is essential for DV image restoration algorithms. However, measuring the DV-PSF at every depth is not feasible. Thus, the DV-PSF is interpolated from a few measured PSF data. Multiple DV imaging models have been presented to solve the interpolation problem of DV-PSF, such as strata-based and PCA-based DV imaging models [17], which require the PSF data of at least two different depths to interpolate another unknown PSF.

Single-pixel imaging (SPI) is an innovative technique that uses correlation measurements to obtain images from non-pixelated detection; it has been widely researched and applied in several areas [18,19]. Fourier single-pixel imaging (FSI) has been proposed and demonstrated [20,21]. FSI can acquire images with high SNR based on a Fourier spectrum acquisition method. An FSI-based PSF acquisition method, which can measure the PSF with high SNR, was presented in our previous study [11].

The previous method is further developed in this work, which makes two main contributions. First, the space-variant PSF (SV-PSF) is measured based on SPI, and the accurate depth of each PSF is also obtained. Second, a new PSF interpolation model is presented. Our method has advantages over previous approaches. First, accurate depth information for each PSF can be extracted. Second, the PSF measurement result at a certain depth, along with the focus distance, is sufficient to interpolate an unknown PSF.

This paper is organized as follows. Section 2 presents the principles of our method, including the 3D image formation model, our PSF model and interpolation method, the SPI-based PSF measurement, and the depth measurement method of the PSF. Section 3 shows our experimental results, and Section 4 presents our conclusions.

2. Principles

A method to measure the SV-PSF and the corresponding depths is presented, and a DV-PSF interpolation model is introduced. In contrast to traditional methods, a digital camera with the lens to be tested is used, while a liquid crystal display (LCD) screen is used to display specific patterns, as shown in Fig. 1.


Fig. 1. Experimental setup. An LCD screen is used to display the patterns. A digital camera with the lens to be tested captures the images. Each pixel of the image sensor can be regarded as a single pixel detector. The distance between the camera and the screen is changed in our experiment.


The framework for PSF measurement and interpolation comprises the following steps:

Step 1: Measuring the SV-PSF at different depths. By displaying and capturing certain sinusoidal patterns and applying the SPI technique, the Fourier spectrum of the light transport coefficients from LCD screen points to image sensor pixels is acquired, which can be converted into the SV-PSF.

Step 2: Measuring the depth of each PSF. By applying multi-frequency heterodyne phase-shifting principles and the perspective-n-point (PnP) algorithm, the relative pose between the screen and the camera is calculated, from which the depth of each PSF is obtained.

Step 3: Modeling and interpolating the PSF. A PSF model is established, from which DV-PSF interpolation can be performed using the light transport coefficients measured at one depth along with the focus distance.

This paper gives a 3D image formation model. Section 2.1 explains how PSFs at different depths result in blurred images when the camera captures a 3D scene. Section 2.2 presents our PSF model and interpolation method. Section 2.3 introduces how to measure the SV-PSF by applying SPI. Section 2.4 shows how the depth of each PSF is calculated by multi-frequency heterodyne phase-shifting principles and the PnP algorithm.

2.1 3D image formation model using SV-PSF

In the traditional image formation model, the degradation of a 2D image [10] is expressed as follows:

$$I(u,v) = \int\!\!\!\int\limits_\Omega {O(x,y)\cdot h(x,y,u,v)dxdy} ,$$
where (x, y) represents the coordinates of the 2D object plane; (u, v) represents the coordinates of the image plane; h(x, y, u, v) represents the light transport coefficients from object points (x, y) to image pixels (u, v), which is also the SV-PSF; O(x, y) represents the input scene; I(u, v) is the captured image; and Ω is the area that the optical system can observe.

In Eq. (1), the PSF changes with the position of the object points. However, the PSF also varies with the depth of the object points. If depth is considered, the image formation model can be extended as follows:

$$I(u,v) = \int\!\!\!\int\limits_\Sigma {O(x,y,z)\cdot h(x,y,z;u,v)} dS,$$
where Σ is the 3D surface in the field of view; (x, y, z) represents the points on Σ; O(x, y, z) is the reflectivity or the light intensity of the object surface; z is the depth of the object points; and h(x, y, z; u, v) is the SV-PSF, which contains depth information. According to the surface integral of the first kind, Eq. (2) can be transformed into:
$$I(u,v) = \mathop{\int\!\!\!\int}\limits_{{\Omega _{xy}}} {O(x,y,z(x,y))\cdot h(x,y,z(x,y);u,v)\sqrt {1 + {{\left(\frac{{\partial z}}{{\partial x}}\right)}^2} + {{\left(\frac{{\partial z}}{{\partial y}}\right)}^2}} } \,dxdy,$$
where Ωxy represents the projection area of Σ on plane O-xy.

To simplify the problem, this paper assumes that the LCD screen is perpendicular to the optical axis of the camera. Thus, the depth of each pixel on the LCD screen is equal, and Eq. (3) can be written as follows:

$$I(u,v) = \mathop{\int\!\!\!\int}\limits_{{\Omega _{xy}}} {O(x,y,z)\cdot h(x,y,z;u,v)} dxdy,$$
where Ωxy represents the object area at equal depth involved in imaging. Equations (1) and (4) have the same form. The main work in this paper is to measure h(x, y, z; u, v), the SV-PSF on different object planes, and to measure the corresponding depths z. The PSF h(x, y, z1; u, v) at depth z1, together with the focus distance zf, is used to interpolate the PSF h(x, y, z2; u, v) at depth z2.
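As a concrete illustration of Eq. (4), the following minimal sketch (in Python, with illustrative array names and a discretized scene; not part of the original method) forms an image by weighting the scene with each pixel's own light transport coefficients:

```python
import numpy as np

def form_image(scene, h, image_shape):
    """Discrete version of Eq. (4): I(u, v) = sum_{x,y} O(x, y, z) * h(x, y, z; u, v).

    scene       : (M, N) array, reflectivity O(x, y, z) on the equal-depth plane
    h           : (M, N, U, V) array of light transport coefficients h(x, y, z; u, v)
    image_shape : (U, V) resolution of the image sensor
    """
    U, V = image_shape
    image = np.zeros((U, V))
    for u in range(U):
        for v in range(V):
            # Each image pixel integrates the scene weighted by its own SV-PSF slice.
            image[u, v] = np.sum(scene * h[:, :, u, v])
    return image
```

The same sum can be written more compactly as `np.einsum('xy,xyuv->uv', scene, h)`; the explicit loops are kept here only to mirror Eq. (4).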

2.2 Depth-variant PSF model by interpolation

A fundamental hypothesis is made in our model: the light from an object point on the focus plane is finally collected by a single pixel of the image sensor. In other words, the defocus effect and other spread effects at the focus plane can be neglected.

Based on the above hypothesis, a significant conclusion can be drawn: all the light collected by one image sensor pixel passes through a certain point on the focus plane.

To express our model better, the actual situation is simplified to a 1-D interpolation situation, as shown in Fig. 2. The response of an image sensor pixel to a ray of light from a certain direction is definite and can be expressed as follows:

$$h({y_1},{z_1};{u_0}) = h({y_2},{z_2};{u_0}),$$
where y1 and y2 are the ordinates on the planes at different depths z1 and z2, and u0 corresponds to the image pixel of the focus point yf on the focus plane at zf. The relationship between these coordinates can be expressed as follows:
$$\frac{{{y_1} - {y_2}}}{{{y_2} - {y_f}}} = \frac{{{z_2} - {z_1}}}{{{z_f} - {z_2}}} = \alpha .$$


Fig. 2. A schematic diagram of our PSF model. The light collected by one image sensor pixel passes through a certain point on the focus plane. The response of an image sensor pixel to a ray of light from a certain direction is definite.


Based on the analysis above, the PSF on plane z2 is represented using the light transport coefficients measured at z1 and the focus distance zf; the PSF at depth z2 is thus interpolated indirectly. On the opposite side of the focus plane, interpolation of the PSF on plane z3 follows the same rule. Figure 3 shows that the PSF interpolation method is divided into two steps.


Fig. 3. A schematic diagram of our PSF model and interpolation method. The first step is interpolating the light transport coefficients, and the second step is recombining the object PSF.


Step 1: Interpolating the light transport coefficients h(x, y, z2; u0, v0) from h(x, y, z1; u0, v0). Considering Eqs. (5) and (6) in the real DV-PSF interpolation, the interpolation process can be expressed as follows:

$$\left[ \begin{array}{c} {x_2}\\ {y_2}\\ {z_2} \end{array} \right] = \left[ \begin{array}{ccc} \alpha + 1 & 0 & 0\\ 0 & \alpha + 1 & 0\\ 0 & 0 & 1 \end{array} \right]\left[ \begin{array}{c} {x_1}\\ {y_1}\\ {z_1} \end{array} \right] - \alpha \left[ \begin{array}{c} {x_f}\\ {y_f}\\ 0 \end{array} \right],$$
$$h({x_2},{y_2},{z_2};{u_0},{v_0}) = h({x_1},{y_1},{z_1};{u_0},{v_0}),$$
where α is expressed in Eq. (6), and (xf, yf) is the point on the focus plane at zf corresponding to the image sensor pixel (u0, v0). The calculation method of the depth is presented in Section 2.4.

Figure 4 shows an example with α = 0.6. The interpolated h(x, y, z2; u0, v0) is a subpixel result, which should be resampled to obtain a result having the same pixel scale as h(x, y, z1; u0, v0).


Fig. 4. The process of interpolating and resampling the light transport coefficients when α = 0.6. After interpolation, a 5×5 light transport coefficient array shrinks into a 3×3 area.


Step 2: Applying step 1 to all image sensor pixels, from which h(x, y, z2; u, v) is obtained. Thus, the PSF h(x0, y0, z2; u, v) of each point (x0, y0, z2) on the plane at depth z2 can be obtained.
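The two steps above can be sketched numerically. The following Python fragment is a minimal, hedged implementation of Eqs. (5)–(8) for one image pixel, assuming the measured coefficients lie on a regular screen-coordinate grid and using SciPy for the resampling; the function and variable names are illustrative, not the original implementation.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def interpolate_coeffs(h1, x1, y1, xf, yf, z1, z2, zf):
    """Interpolate the light transport coefficients of one image pixel (u0, v0)
    from depth z1 to depth z2, following Eqs. (5)-(8).

    h1       : (len(y1), len(x1)) measured coefficients h(x, y, z1; u0, v0)
    x1, y1   : 1-D screen coordinate axes of h1 at depth z1
    (xf, yf) : screen point on the focus plane zf that maps to pixel (u0, v0)
    """
    alpha = (z2 - z1) / (zf - z2)                          # Eq. (6)
    interp = RegularGridInterpolator((y1, x1), h1,
                                     bounds_error=False, fill_value=0.0)

    # Target grid at depth z2 keeps the same pixel scale as h1 (resampling step).
    x2, y2 = np.meshgrid(x1, y1)
    # Invert Eq. (7): the value at (x2, y2, z2) equals h1 at the back-projected point.
    xb = (x2 + alpha * xf) / (alpha + 1.0)
    yb = (y2 + alpha * yf) / (alpha + 1.0)
    h2 = interp(np.stack([yb, xb], axis=-1))               # Eq. (8): same response
    return h2
```

Applying this routine to every image sensor pixel (step 2) yields h(x, y, z2; u, v), from which the PSF of each object point on the plane at depth z2 is recombined.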

2.3 SPI-based PSF measurement

In our SPI-based SV-PSF measurement method, a digital camera records the Fourier coefficients by capturing sinusoidal-structured patterns of different frequencies displayed by an LCD screen [11]; the LCD screen is perpendicular to the optical axis of the camera. By performing an inverse Fourier transform (IFT), an image can be obtained for each single pixel of the image sensor, which represents the light transport coefficients from the screen points to that image sensor pixel. The coefficients from one screen point to different image sensor pixels are then combined, and the PSF is thereby reconstructed.

The FSI sinusoidal-structured patterns displayed by the LCD screen can be written as [20]:

$${P_\varphi }(x,y,z;{f_x},{f_y}) = a + b\cdot \cos (2\pi {f_x}x + 2\pi {f_y}y + \varphi ),$$
where (x, y, z) represents the points on the LCD screen; z is the distance between the LCD screen and the camera; (fx, fy) represents the spatial frequency; φ is the initial phase; a is the average intensity of the displayed image; and b is the brightness amplitude. If the resolution of the LCD screen is M×N, then the patterns must satisfy fx = i/M and fy = j/N to obtain the light transport coefficients from all the screen points to one image sensor pixel, where i = 0, 1, 2, …, M−1 and j = 0, 1, 2, …, N−1. Each combination of (fx, fy) corresponds to a pattern image.
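As an illustration of Eq. (9), the following sketch generates one FSI pattern for a given frequency pair and initial phase; the default values of a and b are an assumption chosen so the displayed values stay in [0, 1].

```python
import numpy as np

def fsi_pattern(M, N, i, j, phi, a=0.5, b=0.5):
    """Sinusoidal FSI pattern of Eq. (9) with fx = i/M, fy = j/N and initial phase phi.

    Returns an (N, M) array (rows correspond to y, columns to x).
    """
    x = np.arange(M)                    # screen column index
    y = np.arange(N)                    # screen row index
    X, Y = np.meshgrid(x, y)
    fx, fy = i / M, j / N
    return a + b * np.cos(2 * np.pi * fx * X + 2 * np.pi * fy * Y + phi)
```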

According to Eq. (4), the response of the digital camera to the displayed patterns is as follows:

$${R_\varphi }({u,v;{f_x},{f_y}} )= \int\!\!\!\int\limits_\Omega {{P_\varphi }(x,y,z;{f_x},{f_y}) \cdot h(x,y,z;u,v)dxdy} + {R_n},$$
where Ω represents the screen area that displays the patterns; (u, v) represents the pixel coordinates of the image; h(x, y, z; u, v) represents the SV-PSF on the plane at distance z; and Rn is the contribution of the ambient light.

For each combination of spatial frequencies (fx, fy), the four-step phase-shifting method is used: the screen displays patterns with initial phases φ = 0, π/2, π, and 3π/2. For one pixel (u0, v0), the corresponding Fourier coefficient is obtained as follows:

$$\begin{array}{ll} H({{u_0},{v_0};{f_x},{f_y}} )&= \frac{1}{{2b}} \cdot [({R_0} - {R_\pi }) + j({R_{\pi /2}} - {R_{3\pi /2}})]\\ \textrm{ } &= \int\!\!\!\int\limits_\Omega {h(x,y,z;{u_0},{v_0}) \cdot \exp [{ - j2\pi ({f_x}x + {f_y}y)} ]} dxdy. \end{array}$$
where exp[-j2π(fxx + fyy)] represents the 2D Fourier transform kernel. After acquiring all the Fourier coefficients, the IFT is applied:
$$h({x,y,z;{u_0},{v_0}} )= IFT[{H({u_0},{v_0};{f_x},{f_y})} ].$$
Equations (9) to (12) show that the light transport coefficients can be obtained by displaying and capturing Fourier sinusoidal-structured patterns. On the left side of Eq. (12), h(x, y, z; u0, v0) is a matrix that represents the light transport coefficients from all screen points (x, y, z) to one image sensor pixel (u0, v0); its resolution is M×N. If we apply this SPI method to all image sensor pixels, we can obtain the SV-PSF h(x, y, z; u, v) in the field of view at distance z.
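A minimal sketch of Eqs. (11) and (12) for one pixel is given below. It assumes the four responses have already been collected into arrays indexed by the pattern frequencies; using a standard inverse FFT presumes the coefficients are arranged in the usual DFT frequency order (fx = 0, 1/m, …, (m−1)/m), which is an assumption of this sketch rather than a statement of the original implementation.

```python
import numpy as np

def reconstruct_coeffs(R0, Rhalf, Rpi, R3half, b=0.5):
    """Recover the light transport coefficients of one image pixel (u0, v0).

    R0, Rhalf, Rpi, R3half : (m, n) arrays of the pixel's responses to the four
    phase-shifted patterns, indexed by the pattern frequencies (fx, fy).
    """
    # Four-step phase shifting cancels the ambient term Rn and the DC offset a, Eq. (11).
    H = ((R0 - Rpi) + 1j * (Rhalf - R3half)) / (2 * b)
    # H samples the Fourier spectrum of h(x, y, z; u0, v0), so an inverse FFT
    # returns the light transport coefficients, Eq. (12).
    h = np.real(np.fft.ifft2(H))
    return h
```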

For images having a resolution of M×N, the number of patterns required by the SPI technique is M×N×4/2 if the conjugate symmetry of the Fourier domain is considered. This process requires a long period of time.

In fact, most values in the M×N matrix are zero. If the effective values in the matrix h(x, y, z; u0, v0) are concentrated in a small m×n area, then the required pattern resolution is only m×n, which is smaller than the image resolution M×N, and the number of patterns needed is m×n×4/2. The frequencies of the patterns in the small area are fx = i/m and fy = j/n, where i = 0, 1, 2, …, m−1 and j = 0, 1, 2, …, n−1. The M×N patterns shown on the LCD are generated by the periodic extension of the small m×n patterns (see the sketch below). By applying SPI, we can calculate h(x, y, z; u0, v0) with a resolution of m×n.
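The periodic extension of the small patterns can be done, for example, with `np.tile`; the snippet below is a sketch under the assumption that the tiled result is cropped to the full screen resolution when M or N is not an exact multiple of the small pattern size.

```python
import numpy as np

def extend_pattern(small_pattern, M, N):
    """Periodically extend a small pattern to the full M-by-N screen resolution.

    small_pattern : (n, m) array (n rows, m columns)
    Returns an (N, M) array.
    """
    n, m = small_pattern.shape
    reps = (int(np.ceil(N / n)), int(np.ceil(M / m)))
    return np.tile(small_pattern, reps)[:N, :M]
```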

2.4 Depth calculation for each PSF

Depth information is necessary for the proposed PSF interpolation method. It is the distance between the LCD screen and the optical center of the camera. By matching object points with image pixels and solving the PnP problem, the extrinsic camera parameters can be calculated, from which the depth of each PSF can be obtained. The matching principle is multi-frequency heterodyne phase shifting [22]. The LCD screen is used to display sinusoidal fringes, which are expressed as follows:

$${I_i}(x,y) = A(x,y) + B(x,y)\cos \left[ {\Phi (x,y) + i \cdot \frac{{2\pi }}{N}} \right],\quad i = 0,1,2,\ldots ,N - 1,$$
where (x, y) is the coordinate on the LCD screen; A(x, y) is the average intensity of the fringe; B(x, y) is the modulation intensity; Φ(x, y) is the phase of the displayed fringe; and N = 4. The response of the camera can be expressed as follows:
$${R_i}(u,v) = a + b\cos \left[ {\Phi (u,v) + i \cdot \frac{{2\pi }}{N}} \right],\quad i = 0,1,2,\ldots ,N - 1.$$
The phase of captured fringe is:
$$\Phi (u,v) = \arctan \frac{{{R_3}(u,v) - {R_1}(u,v)}}{{{R_0}(u,v) - {R_2}(u,v)}}.$$
The phase Φ(u, v) in Eq. (15) is wrapped in (−π, π]. Sinusoidal fringes with different frequencies are used to unwrap the phase. As such, we can remove the 2π periodicity and obtain a continuous phase map φ(u, v) with the heterodyne principle [22,23]. Then, the object point corresponding to each image point is located as follows:
$$\phi (u,v) = \Phi (x,y).$$
The conversion relation from image coordinates (u, v) to screen coordinates (x, y) can be written as follows:
$$\left\{ \begin{array}{l} x = \frac{{\phi (u,v)}}{{2\pi }} \cdot \lambda ,\textrm{ }if\textrm{ }\Phi (x,y) = 2\pi \frac{x}{\lambda }\\ y = \frac{{\phi (u,v)}}{{2\pi }} \cdot \lambda ,\textrm{ }if\textrm{ }\Phi (x,y) = 2\pi \frac{y}{\lambda } \end{array} \right.,$$
where λ is the spatial period of the fringe. Equation (17) shows that, given any image pixel coordinate (u, v) under O-uv, the corresponding screen point coordinate (x, y, 0) can be calculated.
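The four-step phase extraction of Eq. (15) and the phase-to-coordinate conversion of Eq. (17) can be sketched as follows (using arctan2 so the result falls in (−π, π]; the unwrapping step itself, which follows the heterodyne principle of [22,23], is not shown here):

```python
import numpy as np

def wrapped_phase(R0, R1, R2, R3):
    """Eq. (15): wrapped phase from the four phase-shifted camera responses."""
    return np.arctan2(R3 - R1, R0 - R2)

def phase_to_screen_coord(phi_unwrapped, lam):
    """Eq. (17): convert an unwrapped phase map to a screen coordinate (in screen
    pixels), where lam is the fringe period on the screen."""
    return phi_unwrapped / (2 * np.pi) * lam
```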

The camera intrinsic parameters and the distortion coefficients can be solved by Zhang’s calibration method with a calibration board [24]. Once the object points are matched with the image pixels, the following extrinsic parameters can be obtained by applying the PnP algorithm: R and t, which denote the transformation from screen coordinates to camera coordinates.

For a point (x, y, 0) on the screen, the transformation from screen coordinates to camera coordinates is:

$$\left[ {\begin{array}{c} {{x_c}}\\ {{y_c}}\\ {{z_c}}\\ 1 \end{array}} \right] = \left[ {\begin{array}{cc} {\textbf R}&{\textbf t}\\ {{0^T}}&1 \end{array}} \right]\left[ {\begin{array}{c} x\\ y\\ 0\\ 1 \end{array}} \right].$$
Thus, the depth zc, i.e., the distance between the camera and the LCD screen, can be easily calculated using Eq. (18). For each PSF obtained, its depth to the camera can be calculated. Figure 5 shows the process of measuring the PSF and calculating the depths.
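A hedged sketch of this depth calculation is shown below. It assumes OpenCV's solvePnP is used to obtain the extrinsic parameters (the paper does not state which solver is used) and applies Eq. (18) to the screen points of interest; names and shapes are illustrative.

```python
import numpy as np
import cv2

def psf_depths(screen_pts, image_pts, K, dist, psf_screen_pts):
    """Solve the PnP problem and compute the depth z_c of each PSF via Eq. (18).

    screen_pts     : (k, 3) matched screen points (x, y, 0) in mm
    image_pts      : (k, 2) corresponding image pixels (u, v)
    K, dist        : camera intrinsics and distortion coefficients (Zhang's method)
    psf_screen_pts : (p, 3) screen points whose PSF depths are wanted
    """
    ok, rvec, tvec = cv2.solvePnP(screen_pts.astype(np.float64),
                                  image_pts.astype(np.float64), K, dist)
    R, _ = cv2.Rodrigues(rvec)                               # rotation, screen -> camera
    cam_pts = (R @ psf_screen_pts.T + tvec.reshape(3, 1)).T  # Eq. (18)
    return cam_pts[:, 2]                                     # z_c: depth of each PSF
```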


Fig. 5. PSF measurement at different depths and depth calculation, where zf is the focus distance. The distance between the camera and the screen is changed by moving the camera.


3. Experiment

The experimental setup is shown in Fig. 6. An LCD screen is used to show patterns, and a digital camera with the lens to be tested is used to capture images. A workstation is used to control the LCD screen and the camera and to perform data processing. The camera is fixed on a magnetic base; the screen and the camera together with the magnetic base are placed on an optical table. The LCD screen is a Philips 242P6VPJKEB with a resolution of 3840×2160 and a pixel size of 0.2715 mm×0.2715 mm. The camera is a Basler acA1600-20gm, a grayscale camera with a resolution of 1600×1200 and a pixel size of 4.5 µm×4.5 µm. The focal length of the lens is 8 mm, and the F-number is set to 4.0. We move the magnetic base to change the distance between the camera and the screen.


Fig. 6. Experimental setup.


To align the optical axis of the camera perpendicular to the LCD screen, a rectangle is displayed on the LCD screen. We then adjust the pose of the camera until the rectangle is in the center of the captured image and no perspective effect can be observed. If the optical axis of the camera is not well perpendicular to the LCD screen, the quality of the interpolated PSFs will be low.

Considering the potential impact of the intensity nonlinearity of the LCD screen, gamma correction [25] of the LCD screen is required before displaying the patterns. At any depth, the resolution of the screen area corresponding to the image sensor must be higher than the resolution of the image sensor, and the field of view of the camera must be covered by the screen.

3.1 Solving the depth of PSF

Zhang’s calibration method was used to solve the intrinsic parameters of the camera. By displaying and capturing triple-frequency, four-step phase-shifted sinusoidal fringes, the screen points corresponding to the image points are calculated, and the depth of each PSF is then calculated. The total number of fringes is 24.

Following the principle of phase synthesis [26], the width of the synthesized phase must cover the screen area, whose lateral resolution is M = 3840. The values of λ1, λ2, and λ3 are 19, 20, and 21 in our experiment, respectively, so this requirement is met (see the check below).
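Assuming the standard heterodyne (beat) relation between two fringe periods, the synthesized period can be checked as follows; with λ1 = 19, λ2 = 20, and λ3 = 21, the final synthesized period is 3990 pixels, which indeed covers the 3840-pixel screen width. This arithmetic check is our illustration rather than a calculation reported in [26].

```python
def synthetic_wavelength(lam_a, lam_b):
    """Equivalent (beat) period of two fringe periods in the heterodyne principle."""
    return lam_a * lam_b / (lam_b - lam_a)

lam1, lam2, lam3 = 19, 20, 21                      # fringe periods used (screen pixels)
lam12 = synthetic_wavelength(lam1, lam2)           # 380
lam23 = synthetic_wavelength(lam2, lam3)           # 420
lam123 = synthetic_wavelength(lam12, lam23)        # 3990 > 3840, covers the screen width
print(lam12, lam23, lam123)
```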

Figure 7 displays part of the fringes shown on the screen and captured by the camera, and part of the unwrapped phase image, which is finally used for matching and PnP solving.


Fig. 7. Phase unwrap process of captured image. (a) The fringe (part) shown on screen. (b) The fringe (part) captured by the camera. (c) The unwrapped phase image.


Following the principles in Section 2.4, the screen points can be matched with the image pixels, the extrinsic parameters can be solved, and the distance between the screen and the camera can be measured. R is converted into Euler angles (xc-yc-zc order) to represent the perpendicularity between the optical axis of the camera and the LCD screen. The re-projection RMSE represents the accuracy of the PnP solution. The results are given in Table 1.


Table 1. Different depths of camera to measure PSF.

3.2 PSF measurement result at any depth

In our experiment, the PSF is measured at five different depths. The camera is focused at 360 mm. Fourier sinusoidal-structured patterns are used, with frequencies fx and fy in the range [0, 23/24]. The total number of patterns is 1152. For the computation of the light transport coefficients and the PSF, 5×7 uniformly distributed positions on the image sensor plane are selected, starting at (200, 200) with one position every 200 pixels. It takes about 251 s to acquire the dataset at a single depth, about 9 s to calculate the 35 PSFs, and about 16 s to read the 2.06 GB of captured patterns from the hard disk.

The normalized light transport coefficients are shown in Fig. 8. Figure 9 shows the normalized SV-PSF at the different measured depths, which gives a general picture of the SV-PSF of the camera.


Fig. 8. Applying SPI to different image sensor pixels at five different depths. The resolution of each matrix is 21×21, representing the light transport coefficients from screen points to a single image sensor pixel.



Fig. 9. The SV-PSF at five different depths. The resolution of each matrix is 13×13.


The PSF results shown in Fig. 9 indicate that the PSF varies not only across the field of view at a certain depth but also with depth. The shape of the PSF is irregular, and the traditional Gaussian model cannot fit the PSF well at certain positions.

At a fixed position in the field of view, the farther the measurement plane is from the focus plane, the larger the physical size of the PSF. This is because, when the camera is defocused, one pixel generally receives light from a wider area, which results in a larger PSF.

Figures 10 and 11 show the detailed SPI result of a single pixel and the PSF result at the center of the image, where (u, v) = (800, 600).


Fig. 10. The light transport coefficients from different screen points to image sensor pixel (800, 600) at five different depths.



Fig. 11. The corresponding PSF of image sensor pixel (800, 600) at different depths.


Figures 10 and 11 also show that the shape of the PSF differs when the object plane is on opposite sides of the focus plane. This is because the PSF is not an even function of the distance to the focus plane [27]. This leads to a difference in the PSF on the two sides of the focus plane, even when the absolute distance from the focus plane is equal. The light transport coefficients from LCD screen points to a camera pixel are similar in shape to the PSF result because the PSF is nearly invariant over a small area.

The experiment verifies that our method can measure SV-PSF at any depth.

3.3 PSF interpolation experiment

The PSF is interpolated using the method proposed in Section 2.2. We obtain the interpolated PSF at depth z2 = 312 mm using the light transport coefficients measured at depth z1 = 258 mm, and the interpolated PSF at depth z3 = 409 mm using the light transport coefficients measured at depth z4 = 464 mm, with zf = 359 mm. The interpolated results are shown in Figs. 12 and 13.


Fig. 12. Interpolated PSF at depth z2 = 312 mm using measured PSF at depth z1 = 258 mm. (a) Measured PSF at depth = 312 mm. (b) Interpolated PSF at depth = 312 mm using measured light transport coefficients at depth = 258 mm. (c) Error of measured PSF and interpolated PSF.



Fig. 13. Interpolated PSF at depth z3 = 409 mm using measured PSF at depth z4 = 464 mm. (a) Measured PSF at depth = 409 mm. (b) Interpolated PSF at depth = 409 mm using measured light transport coefficients at depth = 464 mm. (c) Error of measured PSF and interpolated PSF.


Figures 12 and 13 show that the PSF interpolation model can well represent PSFs that are not directly measured at some depths. The experiment shows that the PSF at a certain depth can be interpolated from the light transport coefficients at another depth and the focus distance with our PSF interpolation method, which could not be achieved using traditional PSF interpolation methods.

The PSF interpolation error varies because our interpolation method relies on the fundamental hypothesis given in Section 2.2. In a real optical system, this condition cannot be well satisfied over the whole field of view because of different kinds of aberrations. Nevertheless, our interpolation method works well in most situations. When highly irregular PSFs exist, the depth sampling interval should be small to reduce the interpolation error.

To evaluate the PSF model and interpolation method, the captured images, the images deblurred with the measured PSF, the images deblurred with the interpolated PSF, and the original sharp images are used to calculate the correlation coefficient (CC) [11,28] and the structural similarity index (SSIM) [28]. The closer these values are to one, the more similar the two corresponding images. We deconvolve the input images using the Lucy-Richardson algorithm; a sketch of this evaluation is given below.
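The sketch below assumes scikit-image for the Lucy-Richardson deconvolution and SSIM (the keyword for the iteration count differs between scikit-image versions) and NumPy for the CC; the number of iterations is an illustrative choice, not a value reported in the paper.

```python
import numpy as np
from skimage import restoration, metrics

def evaluate_deblur(captured, psf, sharp):
    """Deblur the captured image with a given PSF (Lucy-Richardson) and score the
    result against the original sharp image with CC and SSIM.

    Images are float arrays normalized to [0, 1]; the PSF is normalized to unit sum.
    """
    psf = psf / psf.sum()
    deblurred = restoration.richardson_lucy(captured, psf, num_iter=30)
    cc = np.corrcoef(deblurred.ravel(), sharp.ravel())[0, 1]
    ssim = metrics.structural_similarity(deblurred, sharp, data_range=1.0)
    return deblurred, cc, ssim
```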

When the LCD is placed at z2 and z3, a sharp image is displayed on the screen and captured by the camera. The measured PSF and the interpolated PSF are both used to deblur the captured image. Figures 14 and 15 show the four images at depths z2 and z3, respectively.


Fig. 14. Comparison of the original sharp image, captured image, the images deblurred with measured PSF and interpolated PSF at depth z2 = 312 mm. (a) Original sharp image displayed on the LCD screen. (b) Image captured by the camera. (c) Image deblurred using measured PSF. (d) Image deblurred using interpolated PSF.



Fig. 15. Comparison of the original sharp image, captured image, images deblurred with measured PSF and interpolated PSF at depth z3 = 409 mm. (a) Original sharp image displayed on the LCD screen. (b) Image captured by the camera. (c) Image deblurred using measured PSF. (d) Image deblurred using interpolated PSF.


The CC and SSIM are calculated among the sharp image, the captured image, the image deblurred using the measured PSF, and the image deblurred using the interpolated PSF. The results at depths z2 and z3 are shown in Tables 2 and 3 and Tables 4 and 5, respectively.


Table 2. CC among images at depth z2 = 312 mm.


Table 3. SSIM among images at depth z2 = 312 mm.


Table 4. CC among images at depth z3 = 409 mm.


Table 5. SSIM among images at depth z3 = 409 mm.

From the comparison between the sharp image and the other images in the first column of Tables 2, 3, 4, and 5, we can see that the image quality is improved by using either the measured or the interpolated PSF to deblur the captured image, since the SSIM and CC become larger. The CC and SSIM between the sharp image and the image deblurred using the measured PSF are larger than those between the sharp image and the image deblurred using the interpolated PSF, which indicates that the performance of the measured PSF is better than that of the interpolated PSF. We can also see from the CC and SSIM that the similarity between the image deblurred using the measured PSF and the image deblurred using the interpolated PSF is very high.

4. Conclusion

Detailed steps were presented to measure the SV-PSF at any depth based on the SPI method with a digital camera and an LCD screen. We measured the depth by applying multi-frequency heterodyne phase-shifting principles and the PnP algorithm. Furthermore, we introduced a DV-PSF interpolation model. Experiments showed that our method can measure the SV-PSF at any depth, and significant similarities exist between the interpolated PSF and the directly measured PSF. Our work is a successful application of SPI to solve traditional optical problems.

Funding

National Natural Science Foundation of China (61735003, 61875007); Program for Changjiang Scholars and Innovative Research Team in University (IRT_16R02); Leading Talents Program for Enterpriser and Innovator of Qingdao (18-1-2-22-zhc).

Disclosures

The authors declare no conflicts of interest.

References

1. R. Kotynski, “Fourier optics approach to imaging with sub-wavelength resolution through metal-dielectric multilayers,” Opto-Electron. Rev. 18(4), 366–375 (2010). [CrossRef]  

2. D. M. Wulstein and R. McGorty, “Point-spread function engineering enhances digital Fourier microscopy,” Opt. Lett. 42(22), 4603–4606 (2017). [CrossRef]  

3. R. H. Chan, X. M. Yuan, and W. X. Zhang, “Point-spread function reconstruction in ground-based astronomy by l1-lp model,” J. Opt. Soc. Am. A 29(11), 2263–2271 (2012). [CrossRef]  

4. J. B. Breckinridge, W. S. T. Lam, and R. A. Chipman, “Polarization aberrations in astronomical telescopes: the point spread function,” Publ. Astron. Soc. Pac. 127(951), 445–468 (2015). [CrossRef]  

5. M. Řeřábek, P. Páta, K. Fliegel, J. Švihlik, and P. Koten, “Space variant point spread function modeling for astronomical image data processing,” Proc. SPIE 6691, 66910T (2007). [CrossRef]  

6. H. C. Shin, R. Prager, J. Ng, H. Gomersall, N. Kingsbury, G. Treece, and A. Gee, “Sensitivity to point-spread function parameters in medical ultrasound image deconvolution,” Ultrasonics 49(3), 344–357 (2009). [CrossRef]  

7. N. Zhao, Q. Wei, A. Basarab, D. Kouamé, and J. Y. Tourneret, “Blind deconvolution of medical ultrasound images using a parametric model for the point spread function,” in Proceedings of IEEE International Ultrasonics Symposium (IEEE, 2016), pp.1–4.

8. A. R. Lupini and N. De Jonge, “The three-dimensional point spread function of aberration-corrected scanning transmission electron microscopy,” Microsc. Microanal. 17(5), 817–826 (2011). [CrossRef]  

9. V. R. Manfrinato, J. G. Wen, L. H. Zhang, Y. J. Yang, R. G. Hobbs, B. Baker, D. Su, D. Zakharov, N. J. Zaluzec, D. J. Miller, E. A. Stach, and K. K. Berggren, “Determining the resolution limits of electron-beam lithography: direct measurement of the point-spread function,” Nano Lett. 14(8), 4406–4412 (2014). [CrossRef]  

10. R. C. Gonzalez and R. E. Woods, Digital Image Processing (4th ed.) (Pearson, 2017).

11. H. Jiang, Y. Liu, X. Li, H. Zhao, and F. Liu, “Point spread function measurement based on single-pixel imaging,” IEEE Photonics J. 10(6), 1–15 (2018). [CrossRef]  

12. N. Joshi, R. Szeliski, and D. J. Kriegman, “PSF estimation using sharp edge prediction,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2008), pp. 3823–3831.

13. M. Delbracio, A. Almansa, J. M. Morel, and P. Muse, “Subpixel point spread function estimation from two photographs at different distances,” SIAM J. Imaging Sci. 5(4), 1234–1260 (2012). [CrossRef]  

14. F. Mannan and M. S. Langer, “Blur calibration for depth from defocus,” in Proceedings of 13th Conference on Computer And Robot Vision (IEEE, 2016), pp. 281–288.

15. J. Jemec, F. Pernuš, B. Likar, and M. Bürmen, “2D sub-pixel point spread function measurement using a virtual point-like source,” Int. J. Comput. Vis. 121(3), 391–402 (2017). [CrossRef]  

16. Y. Shih, B. Guenter, and N. Joshi, “Image enhancement using calibrated lens simulations,” in Proceedings of European Conference on Computer Vision (Springer Berlin Heidelberg, 2012), pp. 42–56.

17. N. Patwary and C. Preza, “Image restoration for three-dimensional fluorescence microscopy using an orthonormal basis for efficient representation of depth-variant point-spread functions,” Biomed. Opt. Express 6(10), 3826–3841 (2015). [CrossRef]  

18. B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D computational imaging with single-pixel detectors,” Science 340(6134), 844–847 (2013). [CrossRef]  

19. S. Jiang, X. Y. Li, Z. X. Zhang, W. J. Jiang, Y. P. Wang, G. B. He, Y. R. Wang, and B. Q. Sun, “Scan efficiency of structured illumination in iterative single pixel imaging,” Opt. Express 27(16), 22499–22507 (2019). [CrossRef]  

20. Z. Zhang, X. Ma, and J. Zhong, “Single-pixel imaging by means of Fourier spectrum acquisition,” Nat. Commun. 6(1), 6225 (2015). [CrossRef]  

21. Y. Xiao, L. N. Zhou, and W. Chen, “Fourier spectrum retrieval in single-pixel imaging,” IEEE Photonics J. 11(2), 1–11 (2019). [CrossRef]  

22. C. Reich, R. Ritter, and J. Thesing, “White light heterodyne principle for 3D-measurement,” Proc. SPIE 3100, 236–244 (1997). [CrossRef]  

23. C. Zuo, L. Huang, M. L. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Laser. Eng. 85, 84–103 (2016). [CrossRef]  

24. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Machine Intell. 22(11), 1330–1334 (2000). [CrossRef]  

25. Y. Matsushita, “Radiometric Response Function,” in Computer Vision: A Reference Guide, K. Ikeuchi, (ed.) (Springer, 2014).

26. V. Srinivasan, H. C. Liu, and M. Halioua, “Automated phase-measuring profilometry of 3-D diffuse objects,” Appl. Opt. 23(18), 3105–3108 (1984). [CrossRef]  

27. P. A. Stokseth, “Properties of a defocused optical system,” J. Opt. Soc. Am. 59(10), 1314–1321 (1969). [CrossRef]  

28. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004). [CrossRef]  
