
Deep learning based coherent diffraction imaging of dynamic scattering media

Open Access

Abstract

The ptychographic iterative engine (PIE) is a lensless coherent diffraction imaging algorithm known for its simplicity, ease of use, scalability, and fast convergence. However, practical applications often encounter interference in the imaging results caused by non-static scattering media, such as dense fog, turbid seawater, or biological tissue. To address this challenge, we propose a novel approach that uses computational deep learning for dynamic-scattering-medium image reconstruction, enabling lens-free coherent diffraction imaging through dynamic scattering media. Through extensive analysis, we evaluate the effectiveness of the neural network for PIE image recovery under varying scattering-medium concentration conditions. We also test scattering images obtained by hybrid training with different concentrations of scattering medium to assess the generalisation ability of the neural network. The experimental results demonstrate that our proposed method achieves PIE lens-free imaging under non-static scattering media interference. This coherent diffraction imaging method, based on transmission through dynamic scattering media, opens up new possibilities for practical applications of PIE and fosters its development in complex environments. Its significance extends to fields such as atmospheric pollution monitoring, seawater target detection and medical biology diagnosis, providing valuable references for research in these domains.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

High-resolution imaging of objects plays a vital role in various research fields, including target detection and biomedical studies. Extracting the phase information of objects is a critical approach to enhancing image resolution. However, conventional detectors can only capture the intensity information of the beam, resulting in the loss of the valuable phase distribution that carries additional object details. To overcome this limitation, phase imaging techniques have been developed, such as Hartmann sensing [1], pyramid sensing [2], and coherent diffraction imaging (CDI) [3,4]. Among these techniques, lensless CDI microscopy has gained significant attention due to its simplicity and ease of use, making it an attractive option for high-resolution imaging.

CDI is a technique that utilizes an iterative algorithm to reconstruct high-resolution images by extracting phase information from recorded light intensity data. However, CDI suffers from a small imaging field of view and a tendency to stagnate when reconstructing structurally complex objects [5]. To address these issues and enhance the convergence speed and reliability of CDI, several improved techniques have been developed, including the error reduction (ER) algorithm [6], the hybrid input-output (HIO) algorithm, and the phase recovery technique known as the ptychographic iterative engine (PIE) [7,8]. The PIE technique increases the number of known quantities through multiple lateral scans and exploits the interference mechanisms inherent in partially overlapping positional relationships. Compared to other CDI techniques, it has achieved a qualitative improvement in convergence speed and imaging quality, with the advantages of fast convergence, strong resistance to noise interference, and a scalable imaging range [9]. As a result, it has found widespread applications in various fields, including electron beam imaging [10], biomedical imaging [11], super-resolution imaging [12], and optical component detection [13].

In practical applications, PIE technology often encounters complex dynamic scattering environments, which can interfere with the imaging results and impede the progress of PIE. For instance, in atmospheric pollution monitoring, seawater target detection, and medical biology diagnosis, scattering and absorption effects occur when the light beam passes through atmospheric turbulence, fog, or turbid solutions [14]. These effects introduce random perturbations to the wavefront carrying the target information during propagation, preventing the conventional optical imaging system from achieving a consistent pixel-to-pixel mapping between the object space and the image space [15–17].

A number of methods have been demonstrated to overcome scattering effects and enable imaging through scattering media, the most direct being the use of ballistic photons [18]. However, a strong scattering medium reduces the number of ballistic photons and greatly weakens the signal. Some techniques also require a guide star or access to the far side of the scattering medium to characterise or invert the scattering effects prior to imaging, such as wavefront shaping techniques [19] or transmission matrix measurements [20]. Another approach relies on the memory effect of light passing through the scattering medium [21,22], treating the scattering imaging system as having a spatially translation-invariant point spread function (PSF); a regular phase retrieval algorithm then reconstructs the target image from the autocorrelation of the scattered light intensity. However, each measured PSF is only valid for the scattering characteristics at the time of measurement, so this approach is suitable for static scattering media and cannot practically be used for dynamic scattering media [23]. Under complex and intense scattering conditions, it is difficult for conventional scattering imaging methods to capture a clear image of a target hidden behind the scattering medium [24,25]. For dynamic scattering media the challenge is significantly greater because the scattering characteristics vary in time, so we propose to use deep neural networks to address the interference of dynamic scattering media in the imaging process [26,27].

To address the poor imaging quality and limited progress of PIE in dynamic scattering environments, in this paper we present a novel deep learning-based method for dynamic scattering coherent diffraction imaging, which aims to enhance the imaging performance of PIE when operating in such environments. Deep learning learns the mapping between the input and output signals of a scattering imaging system through a network to achieve image reconstruction, which overcomes the limited applicability of traditional scattering imaging methods [27]. In our experiments, we simulate the dynamic scattering medium of a real environment by adding a fat emulsion suspension before and after the sample, and we collected 1500 degraded scattering images as the dataset for training the network. A clear diffraction image of a target hidden behind the dynamic scattering medium is obtained by using the network to learn the mapping relations of the scattering system. The network-reconstructed diffractograms are then fed into the PIE iteration to obtain the PIE image in a dynamic scattering environment. We analyse how different concentrations of dynamic scattering media, and the mixing of scattering images from each concentration, affect the image recovery of the proposed method. Experimental results show that the method has high generalisation ability and robustness. This deep learning-based dynamic-scattering-medium PIE imaging technique can expand the application scope and attractiveness of PIE technology, promoting its development in complex environments such as atmospheric pollution monitoring and deep-sea target detection.

2. Theory and methods

2.1 Status of PIE development

In the traditional PIE algorithm, after obtaining the PIE scan positions of the sample and the corresponding diffraction patterns, the illumination probe and the sample transmission wavefront function are separated from the diffraction patterns by the PIE algorithm, and the reconstruction of the sample transmission function is achieved by iteration. Researchers have also improved the PIE algorithm in various ways to reduce the effect of experimental errors and improve the accuracy of the algorithm. Examples include position correction algorithms that correct position errors due to mechanical scanning [28], the ePIE method that reconstructs the illumination light [29], and the 3PIE technique that achieves layered imaging of 3D objects [30]. With the development of computer vision, deep learning algorithms offer feature extraction capabilities and structural diversity that have made them widely used in several fields, and coherent diffraction imaging is gradually being cross-fertilised with deep learning [31–33]. In 2019, Liu et al. presented deep learning-based coherent diffractive super-resolution imaging using a generative adversarial network (GAN) to transform low-resolution images into high-resolution images, improving the pixel-size and diffraction limits [34]. In 2021, Bian et al. used a GAN colour-transfer method to replace the multiple illumination sources of different wavelengths in conventional colour PIE imaging, reducing the number of experimental steps and the amount of image data [11]. Therefore, we propose to use neural networks to solve the problem of PIE imaging in a dynamic scattering medium.

2.2 PIE recovery algorithms

The flowchart of the computational algorithm for the PIE imaging technique through a dynamic scattering medium is shown in Fig. 1. The laser beam is expanded by a beam expander to form a uniform spot, and the illumination light P(r) is incident on the sample to be measured, whose distribution is O(r). The beam passing through the circular aperture is called the probe and is represented by the function P(r − Rs(j)), where r = (x, y) is the coordinate on the surface of the object to be measured and Rs(j) is the relative displacement of the illuminating light and the object corresponding to the s(j)-th diffraction spot. Let On(r) denote a random guess for the sample to be tested. The exit wave behind the sample is then φn(r, Rs(j)) = On(r) × P(r − Rs(j)), which propagates to the diffraction recording surface as ϕn(u, Rs(j)) = Γ[φn(r, Rs(j))]. The CCD camera records the diffraction pattern, named Is(j); this recorded Fresnel diffraction pattern is proportional to the squared modulus of the propagated diffracted wavefront. The resolution positive target is then moved to the next position, where another part of the resolution plate is illuminated and the intensity of the diffracted spot is recorded by the CCD industrial camera. In this paper, Rs(j) is the relative vector displacement between the sample and the probe. Based on the recorded diffraction images and a sequential cross-correlation algorithm, positional errors introduced by the displacement platform and external environmental influences are corrected during the iterations. (A minimal numerical sketch of this forward model is given after Fig. 1.)

Fig. 1. Flowchart of the computational algorithm for PIE imaging technique through dynamic scattering media.
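To make this forward model concrete, the following minimal Python sketch (our own illustration; function and variable names are not from the paper) propagates an exit wave O(r)·P(r − Rs(j)) to the detector plane with a Fresnel transfer-function propagator; the camera records only the squared modulus of the result.

```python
import numpy as np

def fresnel_propagate(field, wavelength, z, dx):
    """Fresnel transfer-function propagator: Gamma for z > 0, its inverse for z < 0.
    Assumes a square grid; the constant phase factor exp(jkz) is omitted,
    since it does not affect the recorded intensity |.|^2."""
    fx = np.fft.fftfreq(field.shape[0], d=dx)   # spatial-frequency coordinates
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))  # Fresnel kernel
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Exit wave behind the sample at scan position R_s(j):  phi = O(r) * P(r - R_s(j)).
# The CCD records I_s(j) ~ |Gamma[phi]|^2; all phase information is lost here.
```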

The algorithm for PIE imaging through turbid media based on deep learning is described below:

Step 1: The sample to be tested is first given an initial random guess On(r), where n denotes the iteration number; this estimate of the sample transmission is called the object function. An initial guess P(r − Rs(j)) is likewise assigned to the probe in the illuminated region of the resolution plate. s(j) denotes the order in which the diffracted light intensities of the j patterns are collected.

Step 2: Starting from diffraction pattern s(0), the complex amplitude distribution of the transmitted light field behind the resolution target is given by Eq. (1):

$$\varphi_n(r, R_{s(j)}) = O_n(r) \times P(r - R_{s(j)}),$$

Step 3: The complex amplitude φn(r, Rs(j)) is propagated to the spot recording surface of the CCD camera, as in Eq. (2):

$$\phi_n(u, R_{s(j)}) = \Gamma[\varphi_n(r, R_{s(j)})] = |\phi_n(u, R_{s(j)})|\exp[j\theta_n(u, R_{s(j)})],$$
where u is the coordinate on the recording surface and Γ represents the forward propagation process.

Step 4: Amplitude constraints are applied to the diffracted spot, replacing the amplitude of the calculated light field with the square root of the actually measured light intensity while preserving the phase. Applying this constraint to Eq. (2) yields Eq. (3):

$$\phi_{c,n}(u, R_{s(j)}) = \sqrt{I_{s(j)}}\,\exp[j\theta_n(u, R_{s(j)})],$$

Step 5: Eq. (3) is back-propagated to the object plane, as expressed in Eq. (4):

$$\varphi_{c,n}(r, R_{s(j)}) = \Gamma^{-1}[\phi_{c,n}(u, R_{s(j)})],$$

Step 6: Update the object function On+1(r), as in Eq. (5):

$$O_{n+1}(r) = O_n(r) + \alpha\,\frac{P_n^{\ast}(r - R_{s(j)})}{|P_n(r - R_{s(j)})|_{\max}^{2}}\,(\varphi_{c,n} - \varphi_n),$$

Step 7: A sequential cross-correlation algorithm is employed to correct the distortion of the recovered sample caused by positional errors [35]. Let the sample estimates calculated for a single aperture before and after an iteration be On(r) and On+1(r). The relative displacement ej,n between the two provides feedback to correct the scanning errors, as in Eq. (6); it is obtained by locating the peak of the cross-correlation function in Eq. (7).

$$S_{j,n+1} = S_{j,n} + \beta \cdot e_{j,n},$$
$$C(t) = \sum_r O_{n+1}(r)\,\Pi_n(r)\,O_n^{\ast}(r - t, s_{j,n})\,\Pi_n^{\ast}(r - t),$$
where the parameter β amplifies the position-error signal to control accuracy, and the binary function Π(r) limits the aperture of the sample.
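As a rough illustration of Eqs. (6) and (7) (an assumed NumPy implementation, not the authors' code), the displacement ej,n can be estimated from the peak of an FFT-based cross-correlation of two aperture-limited object estimates and fed back into the scan position with gain β:

```python
import numpy as np

def position_error(O_prev, O_new, mask):
    """Estimate the shift between two aperture-limited object estimates from
    the peak of their cross-correlation, as in Eq. (7)."""
    a, b = O_prev * mask, O_new * mask               # Pi(r) limits the sample aperture
    C = np.fft.ifft2(np.fft.fft2(b) * np.conj(np.fft.fft2(a)))  # cross-correlation
    peak = np.unravel_index(np.argmax(np.abs(C)), C.shape)
    # convert the peak index to a signed shift (FFT wrap-around convention)
    return np.array([p if p <= s // 2 else p - s for p, s in zip(peak, C.shape)])

# Eq. (6): scan-position feedback with gain beta
# S[j] = S[j] + beta * position_error(O_prev, O_new, aperture_mask)
```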

Step 8: Update the circular-aperture probe function Pn+1(r), as in Eq. (8):

$$P_{n+1}(r) = P_n(r) + \beta\,\frac{O_n^{\ast}(r - R_{s(j)})}{|O_n(r - R_{s(j)})|_{\max}^{2}}\,(\varphi_{c,n} - \varphi_n),$$

In Eq. (5) and Eq. (8), α and β denote parameters that adjust the convergence step. On+1(r) represents the updated guess of the object function, and Pn+1(r) the updated guess of the illumination probe function. On+1(r) is then used as the initial input, and the above steps are repeated sequentially for the diffraction patterns recorded by the CCD until all patterns have been used once, completing one iteration.

Step 9: A normalised RMS error metric is defined to monitor the iteration, as in Eq. (9):

$$E_n = \frac{\sum_j \sum_u \left|\sqrt{I_{s(j)}} - |\phi_n(u, R_{s(j)})|\right|^{2}}{\sum_j \sum_u I_{s(j)}}$$

When En is small enough, the iteration stops, and the calculated On+1(r) is the final PIE reconstruction.
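Steps 2-8 can be summarised in a single update loop. The following is a minimal sketch, assuming integer-pixel scan shifts and omitting the position correction of Step 7; all names are illustrative rather than the authors' implementation.

```python
import numpy as np

def _fresnel(field, wl, z, dx):
    """Fresnel transfer-function propagator (Gamma for z > 0, inverse for z < 0)."""
    fx = np.fft.fftfreq(field.shape[0], d=dx)
    FX, FY = np.meshgrid(fx, fx)
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(-1j * np.pi * wl * z * (FX**2 + FY**2)))

def pie_iteration(O, P, intensities, shifts, alpha=1.0, beta=1.0,
                  wl=532e-9, z=28.4e-3, dx=5.2e-6):
    """One full PIE pass (Steps 2-8). intensities[j] is the measured I_s(j);
    shifts[j] is the integer-pixel scan offset R_s(j) of pattern j."""
    for I, R in zip(intensities, shifts):
        Pj = np.roll(P, R, axis=(0, 1))                  # probe at scan position R_s(j)
        phi = O * Pj                                     # Eq. (1): exit wave
        Phi = _fresnel(phi, wl, z, dx)                   # Eq. (2): propagate to the CCD
        Phi_c = np.sqrt(I) * np.exp(1j * np.angle(Phi))  # Eq. (3): amplitude constraint
        phi_c = _fresnel(Phi_c, wl, -z, dx)              # Eq. (4): back-propagate
        d = phi_c - phi
        O_new = O + alpha * np.conj(Pj) * d / (np.abs(Pj) ** 2).max()   # Eq. (5)
        Pj_new = Pj + beta * np.conj(O) * d / (np.abs(O) ** 2).max()    # Eq. (8)
        O = O_new
        P = np.roll(Pj_new, (-R[0], -R[1]), axis=(0, 1))  # back to the probe frame
    return O, P
```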

2.3 Deep learning dynamic scattered image reconstruction

Networks that perform scattering-medium imaging through computational deep learning have shown potential in many respects. Drawing on the eHoloNet and U-Net architectures [36,37], we used an end-to-end neural network for imaging through scattering media that learns an end-to-end mapping on multi-scale features of the training images and uses these features to recover the images of the prediction set. As shown in Fig. 2, the network consists of three main functional blocks: a convolution block, a residual block, and an upsampling block. A 256 × 256 diffraction scattering pattern is the input to the neural network, and a convolutional neural network recovers the target image. After two convolutional layers, the feature map is split into four independent paths, each with a max-pooling layer that downsamples the image by a different power of 2, creating four independent data streams; the output of each data stream passes through four identical residual blocks, which extract image features at different scales. After upsampling, the image is restored to 256 × 256, and the four paths are concatenated into one to obtain the reconstructed image after four further convolutional layers. In Fig. 2, number pairs m-n represent the input and output formats of the convolutional and upsampling layers. Dropout and batch-normalisation (BN) layers are also used to prevent overfitting. (An illustrative sketch of such a multi-scale network follows Fig. 2.)

Fig. 2. Structure of deep learning network for dynamic scattering image reconstruction.
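A PyTorch sketch of such a multi-scale architecture is given below. The channel counts, scale factors (1, 2, 4, 8) and dropout rate are our assumptions for illustration; only the overall topology (two head convolutions, four parallel pooled paths of four residual blocks each, upsampling, concatenation, four tail convolutions) follows the description above.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))
    def forward(self, x):
        return torch.relu(x + self.body(x))   # residual connection

class MultiScaleNet(nn.Module):
    """Illustrative multi-scale encoder-decoder; channel counts and scale
    factors are assumptions, not the authors' exact configuration."""
    def __init__(self, ch=32, scales=(1, 2, 4, 8)):
        super().__init__()
        self.head = nn.Sequential(                  # two input convolutions
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.paths = nn.ModuleList()
        for s in scales:                            # four parallel data streams
            blocks = [nn.MaxPool2d(s)] if s > 1 else []
            blocks += [ResBlock(ch) for _ in range(4)]
            if s > 1:
                blocks += [nn.Upsample(scale_factor=s, mode='bilinear',
                                       align_corners=False)]
            self.paths.append(nn.Sequential(*blocks))
        self.tail = nn.Sequential(                  # four output convolutions
            nn.Conv2d(ch * len(scales), ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Dropout(0.1),                        # keep probability 0.9, as in the paper
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1))
    def forward(self, x):                           # x: (B, 1, 256, 256) scattered input
        f = self.head(x)
        f = torch.cat([p(f) for p in self.paths], dim=1)  # merge the four streams
        return self.tail(f)
```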

3. Experimental description

3.1 Experiment setup

As shown in Fig. 3, we used the Fresnel propagation framework to implement coherent diffraction imaging through dynamic turbidity. A laser with a wavelength of 532 nm is collimated and expanded by a beam expander before reaching the plane of the circular aperture; the beam passing through the circular aperture is referred to as the probe. The object to be tested is a resolution positive test target (USAF 1951, 25.4 mm in diameter, groups +2 to +7) placed on a precision mechanical translation stage with an accuracy of 0.01 mm. A black-and-white industrial CCD camera captures the diffraction pattern. Since the detector must receive and record the diffraction pattern in its entirety, the diameter of the probe should be smaller than the width of the CCD sensor; as shown in Fig. 1, the diameter of the circular aperture is 2.6 mm, equivalent to 500 pixels on the CCD camera. We placed two glass boxes (5 mm × 5 mm × 50 mm) containing a fat emulsion suspension mixed with distilled water before and after the resolution test target to simulate a real turbid-medium scene. When the information-carrying beam propagates through the turbid media in sequence, it becomes highly scattered, and the CCD camera can then only collect a scattered diffraction pattern. Both glass boxes are fixed on a mechanical displacement stage, and the two layers of dynamic scattering medium are moved simultaneously in the X-Y directions shown in the figure so that the CCD camera can capture both clear and scattered images of the object.

The green laser has a dominant wavelength of 532 nm, a power of ∼200 mW, and a multimode fibre output. The beam expander is a GOC-2501, which accepts wavelengths of 450∼680 nm with a beam-expansion ratio of 2× ∼ 6×. The industrial camera is a Daheng Imaging MER-130-30UM-L black-and-white camera with a 1/1.8 MT9M001 rolling-shutter CMOS sensor, 1280 × 1024 resolution, and 5.2 µm pixel size. The resolution positive test target is a USAF 1951 target on 2.0 mm thick soda-lime glass with a chromium test-pattern coating. The turbid solution consists of a mixture of milk and distilled water; owing to the emulsifying effect of the phospholipids in the water, a large number of tiny particles exist in the mixed solution. The solution is kept in motion during the experiment, so multiple scattering randomly changes the transmission direction of the transmitted light, destroying the original spatial phase information and degrading the image [38,39]. The scattering characteristics of the turbid solution used in the experiment are consistent with those of most dynamic scattering environments and are representative of various turbid-solution settings, as well as application scenarios such as deep-sea target detection and atmospheric pollution monitoring.

Fig. 3. Structure of the experimental setup for PIE imaging through a dynamic scattering medium.

Figure 4(a) shows a two-dimensional schematic of PIE imaging through a dynamic scattering medium. The gap between the resolution positive test target and the industrial CCD camera is 28.4 mm. The X-Y positions of the circular aperture and the CCD camera are fixed, and a two-dimensional displacement stage precisely moves the resolution test target along the X-Y axes, which is equivalent to scanning one side of the resolution target with the circular probe. The aperture of the probe shown in Fig. 4(b) is 2.6 mm. When the equivalent aperture is at the first position and the resolution target is then moved by 0.3 mm along the x-y direction, two different diffractograms are recorded by the CCD camera, and the neighbouring diffractograms overlap by 88%. The overlapping part locks the phase information at the different positions, and whenever the resolution target moves to a new position along x-y, the CCD camera records the corresponding diffractogram. (A quick numerical check of this geometry is given below.)
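As a quick sanity check of this geometry (a back-of-the-envelope sketch, not from the paper), the probe footprint on the camera and the linear overlap between adjacent scans can be computed directly:

```python
aperture_d = 2.6e-3   # circular-aperture (probe) diameter, m
pixel = 5.2e-6        # CCD pixel pitch, m
step = 0.3e-3         # scan step along x or y, m

print(aperture_d / pixel)     # 500.0 -> the probe spans ~500 camera pixels
print(1 - step / aperture_d)  # ~0.885 -> consistent with the quoted 88% overlap
```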

Fig. 4. (a) Two-dimensional schematic of PIE lensless imaging through a scattering medium. (b) Diagram of the PIE equivalent aperture displacement, in practice, with the probe fixed to the CCD camera and the resolution plate moved.

3.2 Data preparation

To simulate the imaging process in complex real scenes, we introduced two layers of dynamic turbid medium with concentrations of 0.4 ml/4.5 ml in front of and behind the test sample as the scattering media in the PIE imaging process. The object of data acquisition was the USAF 1951 resolution target. For the test set, we placed the resolution target at a distance of 28.4 mm from the CCD camera, placed equal concentrations of dynamic scattering medium in front of and behind the sample, and moved the sample in an 8 × 8 array according to the PIE algorithm. Each movement was 0.3 mm, and a total of 64 sets of PIE diffractograms with overlapping regions were collected as the test set for the network model, with an image size of 256 × 256. To ensure the generalisation ability of the network model, we varied the distance from the resolution plate to the CCD during acquisition of the training set, placing the resolution plate at 35 mm, 40 mm and 45 mm from the CCD in turn, and rotated the samples through angles of 10°, 30°, 50° and 70° in sequence. At the same time, we changed the sample displacement from the fixed 0.3 mm step to a random displacement distance, so the collected diffraction patterns are random diffraction patterns that differ significantly from the PIE diffraction patterns collected according to the PIE algorithm in the test set. A total of 1500 sets of images were collected as the training set for the network model, with an image size of 256 × 256. (An illustrative enumeration of these acquisition parameters is sketched below.) In Fresnel diffraction, changing the distance changes the diffraction fringes, so there is a corresponding difference in fringe information and pattern structure between the captured test set and training set. Figure 5 shows the training and test sets captured according to the above methodology. To assess the robustness of the network model, we conducted experiments using turbid media with three distinct concentrations and obtained separate training and test datasets for each.
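The acquisition protocol for the training set can be enumerated schematically as below. This is a hypothetical sketch: the per-combination frame count (125) and the maximum random shift are assumptions chosen only so that the total matches the 1500 images reported.

```python
import itertools, random

distances_mm = [35, 40, 45]        # resolution-target-to-CCD distances
angles_deg = [10, 30, 50, 70]      # sample rotation angles
frames_per_setting = 125           # assumed: 3 x 4 x 125 = 1500 training images

def random_shift_mm(max_shift=1.0):
    """Random lateral shift replacing the fixed 0.3 mm PIE step (max is assumed)."""
    return (random.uniform(-max_shift, max_shift),
            random.uniform(-max_shift, max_shift))

acquisitions = [(d, a, random_shift_mm())
                for d, a in itertools.product(distances_mm, angles_deg)
                for _ in range(frames_per_setting)]
print(len(acquisitions))           # 1500
```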

Fig. 5. (a) The training set collected after changing the distance and rotation angle between the resolution board and CCD. (b) The PIE diffractograms collected according to the PIE algorithm are used as the network test set.

3.3 Network training

During the training of the neural network, we define the loss function as the mean square error (MSE) between the reconstructed image and the corresponding known image:

$$\mathrm{MSE} = \min \frac{1}{WHK}\sum_{n=1}^{K}\sum_{j=1}^{W}\sum_{c=1}^{H}\left(\tilde{O}_n(j,c) - O_n(j,c)\right)^{2}$$

In Eq. (10), W and H represent the width and height of the reconstructed image, respectively, and K = 4 is the batch size in the stochastic gradient descent (SGD) algorithm. $\tilde{O}_n(j,c)$ is the reconstructed image of the nth diffraction pattern, and $O_n(j,c)$ represents the corresponding ground-truth image. Following the SGD method, we randomly select four images from the training dataset and calculate their mean squared error (MSE) with respect to their corresponding reconstructed images. We optimise the weights using the adaptive moment estimation (Adam) algorithm, set the learning rate to 0.01, and set the retention probability of the dropout layer to 0.9. (A minimal training-step sketch follows.)
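Below is a minimal training-step sketch consistent with these settings (batch size K = 4, Adam with learning rate 0.01, MSE loss). The random tensors, the stand-in model and the epoch count are placeholders for illustration; in practice the model would be the multi-scale network sketched after Fig. 2.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

scattered = torch.rand(1500, 1, 256, 256)  # stand-in for the captured scattered images
clear = torch.rand(1500, 1, 256, 256)      # stand-in for the matching clear diffractograms

# a single convolution is used here as a stand-in for the MultiScaleNet sketch
model = torch.nn.Conv2d(1, 1, 3, padding=1)
opt = torch.optim.Adam(model.parameters(), lr=0.01)  # learning rate from the paper
loss_fn = torch.nn.MSELoss()                         # Eq. (10), averaged over W*H*K

loader = DataLoader(TensorDataset(scattered, clear), batch_size=4, shuffle=True)
for epoch in range(10):                   # epoch count is illustrative
    for x, y in loader:                   # K = 4 images per optimisation step
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
```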

4. Results and discussion

4.1 PIE imaging through dynamic scattering media

The PIE imaging technique involves acquiring a series of diffractograms of the sample and subsequently reconstructing a high-resolution image of the sample from these partially overlapping diffractograms. In our experiments, we used an 8 × 8 scan array to cover the entire resolution plate. The two layers of turbid medium are moved simultaneously using a two-dimensional displacement stage while acquiring the scattered patterns of the resolution plate and the corresponding true diffraction images, to better simulate the dynamic scattering environment of a real scenario. In the network training phase, we fed the 1500 collected training images into the network for learning, with the images resized to 256 × 256 pixels; training took 7 hours, and sufficiently good convergence was usually reached within 5 hours. In the use phase, we transformed the series of diffraction scattering patterns required for PIE lensless imaging into diffractograms close to the true values using the network, and then iterated the clear diffraction images into high-resolution PIE images.

Figure 6 shows a comparison between the original clear images, the scattered diffraction patterns and the reconstructed diffractograms. It is evident that the diffractograms reconstructed by the network have high similarity to the original clear diffractograms, and the network reconstruction achieves good results. Table 1 shows the loss function during training of the deep network: the loss had fallen to 0.0013 after about 1000 training iterations. At this point, the network model had achieved significantly improved results, indicating successful convergence of the training process.

Fig. 6. Network reconstruction map and experimental real map. (a)∼(f) are the partially true diffractograms of the experimental samples and the corresponding scattering diffractograms and network reconstruction diagrams.

Table 1. Training convergence data for the deep training phase

After converting the network-reconstructed scattered diffractograms into clear diffractograms, we use the PIE technique to transform the lensless diffractograms into high-resolution real images of the objects by iteration. Figure 7 shows the PIE images formed by iterating the true diffractograms, the scattered diffractograms and the reconstructed diffractograms, which are, in order, the real PIE high-resolution image, the scattered PIE high-resolution image and the reconstructed PIE high-resolution image. We evaluated the reconstruction results using SSIM: the SSIM between the scattered PIE image and the real PIE image was 0.2183, while that between the reconstructed PIE image and the real PIE image was 0.9640 (an SSIM evaluation sketch is given below). The SSIM of the reconstructed PIE images is thus greatly improved compared with the scattered PIE images. The results show that our proposed deep learning-based coherent diffraction imaging method for dynamic scattering media obtains better imaging results in a dynamic scattering environment. Columns (a)∼(f) in Fig. 8 show clear PIE resolution images of some randomly selected regions: the first row is the original PIE micrograph, the second row is the PIE micrograph through the scattering medium, and the third row is the PIE micrograph reconstructed by deep learning. As can be seen from Fig. 8, the network-reconstructed PIE image, which removes the influence of the dynamic scattering medium on the imaging results, retains more information about the stripe structure and details of the object.
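The SSIM comparison can be reproduced with scikit-image; the random arrays below are placeholders standing in for the three PIE results.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(0)
pie_true = rng.random((256, 256))        # placeholder for the real PIE image
pie_scattered = rng.random((256, 256))   # placeholder for the scattered PIE image
pie_recon = pie_true.copy()              # placeholder for the reconstructed PIE image

dr = pie_true.max() - pie_true.min()
print(ssim(pie_true, pie_scattered, data_range=dr))  # the paper reports 0.2183
print(ssim(pie_true, pie_recon, data_range=dr))      # the paper reports 0.9640
```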

Fig. 7. Comparison of experimental real PIE lens-free image, scattered PIE lens-free image, and reconstructed PIE lens-free image; the number in the upper left corner is the structural similarity of the images.

Fig. 8. Real PIE lensless map, scattered PIE lensless map, reconstructed PIE lensless map for randomly selected experimental sample regions.

4.2 Image reconstruction for different concentrations of scattering media

In our experiments, we found that as the concentration of the turbid medium increased, the number of scattering particles in the solution also increased, which in turn made the recorded scattering more pronounced. To better simulate the effect of a dynamic turbid medium on the imaging system in a real scene, we collected scattering diffractograms under three different concentration conditions and investigated the effect of the scattering medium on image reconstruction at each concentration. The three scattering concentrations were 0.5 ml/4.5 ml, 0.7 ml/4.5 ml, and 0.9 ml/4.5 ml, named case1, case2, and case3 in that order. Following the dataset acquisition method described above, three datasets with different scattering concentrations were prepared, each with a training set of 1500 images and a test set of 64 images. The effect of the scattering-medium concentration on the network recovery is compared by reconstructing the images through PIE. As shown in Fig. 9, case1∼case3 are the high-resolution images obtained after PIE iteration of the clear diffractograms, scattered diffractograms, and reconstructed diffractograms at the different medium concentrations. Table 2 lists the SSIM, PSNR, and MSE values of the PIE images at the three concentrations for case1∼case3.

Fig. 9. case1∼case3 show the real PIE lensless map, the scattered PIE lensless map, and the reconstructed PIE lensless map, respectively, under the condition of transmitting through different concentrations of turbid medium.

Table 2. SSIM, MSE, PSNR for three concentrations of dynamic scattering medium conditions

As shown in Fig. 9, as the concentration of the turbid medium increases, the high-resolution image obtained by PIE matching of the scattered diffraction patterns acquired by the CCD becomes more blurred: the overall contour of the image is difficult to distinguish, the features are not obvious, and the loss of detail is more serious. The network-reconstructed PIE images were trained with the same learning rate and number of iterations under the three concentration conditions, and all yielded high-fidelity images. As the concentration decreases, the reconstructed PIE high-resolution images become richer in detail. From Table 2, it can be seen that the SSIM, MSE, and PSNR of the network-reconstructed PIE images all improve as the concentration of the scattering medium decreases. The PIE reconstructions at all three concentrations obtain high evaluation indices, and the overall structure and detail of the object are retained intact. These findings highlight the effectiveness of our deep learning-based method in mitigating the adverse effects of dynamic scattering media during coherent diffraction imaging; the approach captures and preserves the details of objects even in challenging scattering conditions.

Furthermore, we recognise that in practical scenarios the turbidity of a turbid medium may not be constant. To address this variability and better simulate such situations, we collected 500 images at each of the three scattering-medium concentrations (case1 to case3) to produce a 1500-image dataset, used it as the training set for the network model, and tested with the case1 test set. The results of the PIE iteration at the case1 concentration are shown in Fig. 10; the SSIM of the PIE reconstructed image is as high as 0.9426. The experimental results show that the method is still able to maintain a good recovery effect in a mixed environment of multiple concentrations. Through this analysis of the similarity of PIE reconstructed images under different and mixed scattering-medium concentrations, our deep learning-based coherent diffraction imaging algorithm for dynamic scattering media delivers high-quality PIE imaging results; its robustness and generalisation capability are well demonstrated, making it a promising tool for high-resolution imaging in dynamically turbid media.

Fig. 10. Recovering Real PIE Images, Scattered PIE Images, Reconstructed PIE Images at Case1 Concentration Using Mixed Training Sets of Three Different Concentrations of Scattering Media. The number in the upper right corner is the structural similarity of the image.

5. Conclusion

In this paper, a novel deep learning-based coherent diffraction imaging technique for dynamic turbid media is proposed, providing a feasible way to solve the PIE imaging problem in dynamic scattering environments and extending the practical application scenarios of the PIE imaging technique. The conventional PIE imaging process usually requires minimising the impact of external noise on the diffractograms captured by the CCD to achieve better high-resolution images. Our method introduces a dynamic scattering medium into the PIE lensless imaging process, simulating scattering phenomena in real complex scenes, and by employing a convolutional neural network we achieve super-resolution imaging of PIE in turbid media. This extends the applicability and appeal of PIE lensless imaging and promotes the advancement of PIE technology in more complex environments. For training the deep learning network, a resolution positive target was used, enabling high-resolution PIE imaging through dynamic scattering media. The proposed method was also tested under different and mixed scattering-medium concentration conditions. The performance of the PIE reconstructed images was evaluated using SSIM, giving 0.9640 for the PIE image reconstructed through the dynamic scattering medium. The experimental results demonstrate the high generalisation ability and image fidelity of our approach, which effectively removes the impact of dynamic scattering media on the PIE imaging results. Future studies can explore other types of scattering media and other concentration environments, further expanding the application scope of the proposed method to a wider range of scattering scenarios. Its practical significance is evident in atmospheric pollution monitoring, seawater target detection and medical biology diagnosis.

Funding

Zhejiang A&F University (2022LFR030); Department of Education of Zhejiang Province (Y202249432).

Acknowledgements

This work has been supported by the Department of Education of Zhejiang Province fund (Y202249432) and the Scientific Research Development Fund Project of Zhejiang A&F University (2022LFR030).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

References

1. L. Gilles and B. L. Ellerbroek, “Real-time turbulence profiling with a pair of laser guide star Shack–Hartmann wavefront sensors for wide-field adaptive optics systems on large to extremely large telescopes,” J. Opt. Soc. Am. A 27(11), 1 (2010). [CrossRef]  

2. J. Costa, “Modulation effect of the atmosphere in a pyramid wave-front sensor,” Appl. Opt. 44(1), 60–66 (2005). [CrossRef]  

3. M. Li, L. Bian, and J. Zhang, “Multi-slice coded coherent diffraction imaging,” Optics and Lasers in Engineering 151, 106929 (2022). [CrossRef]  

4. X. Zhou, X. Wen, Y. Ji, Y. Geng, S. Liu, and Z. Liu, “Fast automatic multiple positioning for lensless coherent diffraction imaging,” Optics and Lasers in Engineering 155, 107055 (2022). [CrossRef]  

5. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21(15), 1 (1982). [CrossRef]  

6. T. R. Crimmins, J. R. Fienup, and B. J. Thelen, “Improved bounds on object support from autocorrelation support and application to phase retrieval,” J. Opt. Soc. Am. A 7(1), 3–13 (1990). [CrossRef]  

7. J.T. Dou, T.Y. Zhang, C. Wei, Z.M. Yang, Z.S. Gao, J. Ma, J.X. Li, Y.Y. Hun, and D. Zhu, “Single-shot ptychographic iterative engine based on chromatic aberrations,” Opt. Commun. 440, 139–145 (2019). [CrossRef]  

8. C. Xu, A. Cao, H. Pang, Q. Deng, S. Hu, and H. Yang, “Lensless imaging via multi-height mask modulation and ptychographical phase retrieval,” Optics and Lasers in Engineering 169, 107739 (2023). [CrossRef]  

9. H.M.L. Faulkner and J.M. Rodenburg, “Movable Aperture Lensless Transmission Microscopy: A Novel Phase Retrieval Algorithm,” Phys. Rev. Lett. 93(2), 023903 (2004). [CrossRef]  

10. D.J. Batey, D. Claus, and J.M. Rodenburg, “Information multiplexing in ptychography,” Ultramicroscopy 138, 13–21 (2014). [CrossRef]  

11. Y. Bian, Y. Jiang, J. Wang, S. Yang, W. Deng, X. Yang, R. Shen, H. Shen, and C. Kuang, “Deep learning colorful ptychographic iterative engine lens-less diffraction microscopy,” Optics and Lasers in Engineering 150, 1 (2022). [CrossRef]  

12. A.M. Maiden, M.J. Humphry, F. Zhang, and J.M. Rodenburg, “Superresolution imaging via ptychography,” J. Opt. Soc. Am. A 28(4), 604–612 (2011). [CrossRef]  

13. H.Y. Wang, C. Liu, S.P. Veetil, X.C. Pan, and J.Q. Zhu, “Measurement of the complex transmittance of large optical elements with Ptychographical Iterative Engine,” Opt. Express 22(2), 2159 (2014). [CrossRef]  

14. J. Tian, Z. Murez, T. Cui, Z. Zhang, and R. Ramamoorthi, “Depth and Image Restoration from Light Field in a Scattering Medium,” in 2017 IEEE International Conference on Computer Vision (ICCV) (2017).

15. X. Pei, H. Shan, and X. Xie, “Super-resolution imaging with large field of view for distant object through scattering media,” Optics and Lasers in Engineering 164, 107502 (2023). [CrossRef]  

16. A.K. Pediredla, S. Zhang, B. Avants, F. Ye, and A. Veeraraghavan, “Deep Imaging in Scattering Media with Single Photon Selective Plane Illumination Microscopy (SPIM),” J Biomed Opt 1, 1–4 (2016). [CrossRef]  

17. R. Lan, H. Wang, S. Zhong, Z. Liu, and X. Luo, “An integrated scattering feature with application to medical image retrieval,” Computers & Electrical Engineering 1, S0045790617333694 (2018). [CrossRef]  

18. L. Wang and P.P. Ho, “Ballistic 2-D imaging through scattering walls using an ultrafast optical Kerr gate,” Science 1, 1 (1991). [CrossRef]  

19. A. Sanjeev, Y. Kapellner, N. Shabairou, E. Gur, M. Sinvani, and Z. Zalevsky, “Non-invasive imaging through scattering medium by using a reverse response wavefront shaping technique,” Scientific Reports 10, 12775 (2020). [CrossRef]  

20. S.M. Popoff, G. Lerosey, R. Carminati, M. Fink, A.C. Boccara, and S. Gigan, “Measuring the Transmission Matrix in Optics : An Approach to the Study and Control of Light Propagation in Disordered Media,” Phys. Rev. Lett. 104(10), 100601 (2010). [CrossRef]  

21. S. Feng, C. Kane, P.A. Lee, and A.D. Stone, “Correlations and Fluctuations of Coherent Wave Transmission Through Disordered Media,” Phys. Rev. Lett. 61(7), 834–837 (1988). [CrossRef]  

22. D. Lu, M. Liao, W. He, G. Pedrini, W. Osten, and X. Peng, “Tracking moving object beyond the optical memory effect,” Optics and Lasers in Engineering 124, 105815 (2020). [CrossRef]  

23. J. Bertolotti, E.G. van Putten, C. Blum, A. Lagendijk, W.L. Vos, and A.P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491(7423), 232–234 (2012). [CrossRef]  

24. O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 1(1), 1 (2014). [CrossRef]  

25. E. Akkermans and G. Montambaux, Mesoscopic Physics of Electrons and Photons (Cambridge University Press, 2007).

26. G. Li, W. Yang, H. Wang, and G. Situ, “Image Transmission through Scattering Media Using Ptychographic Iterative Engine,” Appl. Sci. 9(5), 1 (2019). [CrossRef]  

27. Y. Sun, J. Shi, L. Sun, J. Fan, and G. Zeng, “Image reconstruction through dynamic scattering media based on deep learning,” Opt. Express 27(11), 16032–16046 (2019). [CrossRef]  

28. A.M. Maiden, M.J. Humphry, M.C. Sarahan, B. Kraus, and J.M. Rodenburg, “An annealing algorithm to correct positioning errors in ptychography,” Ultramicroscopy 120, 64–72 (2012). [CrossRef]  

29. A.M. Maiden and J.M. Rodenburg, “An improved ptychographical phase retrieval algorithm for diffractive imaging,” Ultramicroscopy 109(10), 1256–1262 (2009). [CrossRef]  

30. T.M. Godden, R. Suman, and M. J. Humphry, “Ptychographic microscope for three-dimensional imaging,” Opt. Express 22(10), 12513–12523 (2014). [CrossRef]  

31. R. Harder, “Deep neural networks in real-time coherent diffraction imaging,” IUCrJ 8(1), 1–3 (2021). [CrossRef]  

32. D. Yin, Z.Z. Gu, Y.R. Zhang, F.Y. Gu, S.P. Nie, S.T. Feng, J. Ma, and C.J. Yuan, “Speckle noise reduction in coherent imaging based on deep learning without clean data,” Optics and Lasers in Engineering 133, 1 (2020). [CrossRef]  

33. R.Q. Liu, C. Peng, X.Y. Liang, and R.X. Li, “Coherent beam combination far-field measuring method based on amplitude modulation and deep learning,” Chin. Opt. Lett. 18, 1 (2020). [CrossRef]  

34. T. Liu, K. De Haan, Y. Rivenson, Z. Wei, X. Zeng, Y. Zhang, and A. Ozcan, “Deep learning-based super-resolution in coherent imaging systems,” Sci. Rep. 9(1), 1 (2019). [CrossRef]  

35. F. Zhang, I. Peterson, J. Vila-Comamala, A. Diaz, F. Berenguer, R. Bean, B. Chen, A. Menzel, I.K. Robinson, and J.M. Rodenburg, “Translation position determination in ptychographic coherent diffraction imaging,” Opt. Express 21(11), 13592–13606 (2013). [CrossRef]  

36. F. Wang, H. Wang, H. Wang, G. Li, and G. Situ, “Learning from simulation: An end-to-end deep-learning approach for computational ghost imaging,” Opt. Express 27(18), 25560–25572 (2019). [CrossRef]  

37. H. Wang, M. Lyu, and G. Situ, “eHoloNet: a learning-based end-to-end approach for in-line digital holographic reconstruction,” Opt. Express 26(18), 22603–22614 (2018). [CrossRef]  

38. L. Spinelli, F. Martelli, A. Farina, A. Pifferi, and G. Zaccanti, “Calibration of scattering and absorption properties of a liquid diffusive medium at NIR wavelengths. CW method,” Opt. Express 15(11), 6589–6604 (2007). [CrossRef]  

39. R. Michels, F. Foschum, and A. Kienle, “Optical properties of fat emulsions,” Opt. Express 16(8), 5907–5925 (2008). [CrossRef]  
