Deep-learning-augmented microscopy for super-resolution imaging of nanoparticles

Open Access

Abstract

Conventional optical microscopes generally produce blurry, indistinguishable images of subwavelength nanostructures. However, a wealth of intensity and phase information is hidden in the corresponding diffraction-limited optical patterns and can be used to recognize structural features such as size, shape, and spatial arrangement. Here, we apply a deep-learning framework to improve the spatial resolution of optical imaging for metal nanostructures with regular shapes but varied arrangements. A convolutional neural network (CNN) is constructed and trained with the optical images of randomly distributed gold nanoparticles as inputs and the corresponding scanning-electron microscopy images as ground truths. The trained CNN then recovers non-diffracted, super-resolution images of both regularly arranged nanoparticle dimers and randomly clustered nanoparticle multimers from their blurry optical images. The profiles and orientations of these structures are also reconstructed accurately. Moreover, the same network is extended to deblur the optical images of randomly cross-linked silver nanowires. Most sections of these intricate nanowire nets are recovered well, with only slight discrepancies near their intersections. This deep-learning-augmented framework opens new opportunities for computational super-resolution optical microscopy, with many potential applications in bioimaging and in nanoscale fabrication and characterization. It could also be applied to significantly enhance the resolving capability of low-magnification scanning-electron microscopy.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Optical microscopes suffer from limited spatial resolution when imaging objects with nanoscale dimensions because of light diffraction. Over the past decades, the exploration of new optical super-resolution microscopies that can visualize subwavelength features beyond the diffraction limit has attracted persistent interest. The most popular super-resolution approaches are the far-field fluorescence-based microscopies. Diversified fluorescence super-resolution techniques, such as PALM [1], STORM [2], STED [3], and SIM [4,5], have become the most widespread bioimaging tools, achieving state-of-the-art spatial resolutions of a few nanometers. These fluorescence microscopies localize sparse photoactivatable fluorophores to resolve the spatial details of tightly packed molecules, so the samples must be stained with special fluorescent labels. This requirement restricts the imaging of inorganic samples and hinders the application of current fluorescence microscopies in the fast-developing fields of nanofabrication and integrated devices, which normally deal with artificial, regular, inorganic nanostructures. For such samples, a simple super-resolution optical imaging method that works in a non-intrusive, wide-field, and far-field manner is still under exploration.

In principle, conventional far-field optical imaging systems are inevitably constrained by Abbe's diffraction limit, which arises essentially from the loss of image information carried by exponentially decaying evanescent waves at the sample surface. Optical "hardware" innovations, such as super-lenses [6-9] that can collect full-wave information, are promising routes to break this resolution constraint. At present, however, these metamaterial optical elements are still far from practical application, with many technical challenges unsolved, e.g., their quasi-near-field working distance. In addition to these instrumental attempts, computational methods have also played an important role in the super-resolution reconstruction of optical micrographs [10,11]. For example, the optical blurring of fluorescent images can be formulated as the convolution of the specimen's scattering field with the point spread function (PSF) of an incoherent imaging system. By applying appropriate deconvolution algorithms, the fine structures of objects can be recovered inversely from their blurry images. For coherent imaging systems, many deconvolution algorithms, such as bandwidth extrapolation [12] and compressed sensing [13,14], have been proposed for the super-resolution cognition of subwavelength structures based on Fourier optics and information theory. The physical limit on the spatial resolution recoverable from far-field imaging is not necessarily close to Abbe's criterion, as discussed in a recent work [15]. However, these semi-analytical algorithms show limited super-resolving capability and applicability when addressing the highly nonlinear and ill-posed inverse problems of deep-subwavelength imaging.
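
To make the forward model above concrete, the following minimal sketch simulates the incoherent image-formation step: a sparse object is convolved with a system PSF to give a diffraction-limited image. The Gaussian PSF and all numbers are illustrative stand-ins, not the measured optics of this work.

```python
# Toy sketch of the incoherent image-formation model: the recorded micrograph
# is the object convolved with the system's point spread function (PSF).
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size: int = 33, sigma: float = 4.0) -> np.ndarray:
    """Normalized 2D Gaussian standing in for the real Airy pattern."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

# Two point scatterers closer than the blur width merge into one bright spot.
obj = np.zeros((128, 128))
obj[64, 58] = obj[64, 70] = 1.0
blurred = fftconvolve(obj, gaussian_psf(), mode="same")  # diffraction-limited image
```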

In recent years, deep learning has emerged as a versatile and potent tool for tackling complex and previously intractable computational inverse problems in image processing, such as super-resolution reconstruction [16,17], classification and segmentation [18], phase retrieval [19-21], tomography [22,23], and holographic imaging [24-26]. Deep-learning methods are also very powerful for enhancing the spatial resolution of conventional optical microscopies, such as wide-field microscopy [27] and scanning confocal microscopy [28]. Nevertheless, the networks for inverse image recovery must be pre-trained with extensive prior-known data, so such data-driven models lack generality. Moreover, the "black box" nature of deep-learning networks raises concerns about how completely the underlying physics is captured by the network. Consequently, a growing number of studies aim to incorporate physical constraints into deep neural architectures, with the goal of reducing the reliance on extensive training datasets [29-33]. For nanostructures with regular shapes (nanowires, nanodisks, nanospheres, etc.), the far-field scattering fields, described by intensity, phase, polarization, and spectral distributions, can be precisely reconstructed by physical models. This provides an ideal platform for building physics-informed deep-learning networks.

Here we present a deep-learning convolutional neural network (CNN) architecture for the deconvolution of optical microscopy images of both regularly arranged and randomly clustered metal nanostructures, achieving super-resolution recovery of their real morphologies. First, randomly distributed gold nanoparticles (AuNPs) are taken as an example to demonstrate the proposed CNN algorithm; such noble-metal nanoparticles have many important applications in biosensing and bioimaging [34]. The optical micrographs of the AuNPs, which record the grayscale intensities of their scattering fields, are used as the input images, while their scanning-electron microscope (SEM) images captured at 10,000-fold magnification are treated as ground truths. Leveraging an extensive training dataset, the constructed CNN learns to transform the blurry optical images of randomly clustered AuNPs into SEM-like high-resolution images in which closely spaced nanoparticles can be clearly distinguished. We then extend the same network to recover the optical images of randomly cross-linked silver nanowires. Most sections of these intricate nanowire nets can be reconstructed, with only a slight discrepancy near their intersections. The deep-learning method demonstrated here allows direct deconvolution of wide-field optical micrographs into super-resolution images with deep-subwavelength resolution.

2. Architecture design of our deep convolutional neural network

We take gold nanoparticles (AuNPs) as an example to demonstrate the deep-learning deconvolution model for the super-resolution reconstruction of optical images. The optical micrographs of the AuNPs were collected by a scientific camera mounted on a dark-field optical microscope, as schematically shown in Fig. 1(a). From the diffraction patterns of the AuNPs (Fig. 1(b)), we can see that closely spaced AuNP dimers cannot be distinguished if the inter-particle distance is below the Rayleigh criterion (Δ = 0.61λ/NA, i.e., Δ ≈ 429 nm for λ = 633 nm and NA = 0.9). However, the intensity profiles of the diffraction patterns still differ in many respects, providing rich information for the inverse recovery of the nanoparticle arrangement. We designed a CNN to establish the nonlinear mapping between optical diffraction images and particle dimensions. The proposed architecture incorporates an explicit non-local prior through a non-local denoising module and an implicit non-local prior through a U-net structure, as shown in Fig. 2(a) [35]. It is built on a four-scale encoder-decoder U-net with a skip connection at each scale; in the third scale, we insert the non-local denoising module into both the encoder and the decoder. For down-sampling and up-sampling at each scale, we use strided convolution layers with 3 × 3 kernels and transposed convolution (ConvTranspose) layers with 2 × 2 kernels, respectively. The kernel sizes of the other convolution layers are set to 3 × 3, following common practice. The number of feature maps is set to 32, 64, 128, and 256 at the four scales. All convolutional layers (except the final reconstruction layer) are followed by a rectified linear unit (ReLU) activation function.
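
A compact PyTorch sketch of an architecture matching this description is given below. The published implementation is MATLAB source code (Code 1, Ref. [38]), so the layer layout and the embedded-Gaussian form of the non-local block here are illustrative assumptions rather than the authors' exact network.

```python
# Illustrative sketch of the described four-scale non-local U-net.
import torch
import torch.nn as nn

class NonLocalBlock(nn.Module):
    """Embedded-Gaussian non-local module: the explicit self-similarity prior."""
    def __init__(self, ch):
        super().__init__()
        self.theta = nn.Conv2d(ch, ch // 2, 1)
        self.phi = nn.Conv2d(ch, ch // 2, 1)
        self.g = nn.Conv2d(ch, ch // 2, 1)
        self.out = nn.Conv2d(ch // 2, ch, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # B x HW x C/2
        k = self.phi(x).flatten(2)                     # B x C/2 x HW
        v = self.g(x).flatten(2).transpose(1, 2)       # B x HW x C/2
        attn = torch.softmax(q @ k, dim=-1)            # pairwise pixel similarities
        y = (attn @ v).transpose(1, 2).reshape(b, c // 2, h, w)
        return x + self.out(y)                         # residual connection

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class NonLocalUNet(nn.Module):
    """Four-scale encoder-decoder with a skip connection at each scale;
    feature widths 32/64/128/256 as stated in the text."""
    def __init__(self):
        super().__init__()
        ch = [32, 64, 128, 256]
        self.enc1 = conv_block(1, ch[0])
        self.enc2 = conv_block(ch[0], ch[1])
        self.enc3 = conv_block(ch[1], ch[2])
        self.nl_enc = NonLocalBlock(ch[2])             # non-local module, third scale
        self.bottom = conv_block(ch[2], ch[3])
        # strided 3x3 convolutions for down-sampling
        self.down = nn.ModuleList([nn.Conv2d(c, c, 3, stride=2, padding=1) for c in ch[:3]])
        # 2x2 transposed convolutions for up-sampling
        self.up3 = nn.ConvTranspose2d(ch[3], ch[2], 2, stride=2)
        self.up2 = nn.ConvTranspose2d(ch[2], ch[1], 2, stride=2)
        self.up1 = nn.ConvTranspose2d(ch[1], ch[0], 2, stride=2)
        self.dec3 = conv_block(ch[3], ch[2])
        self.nl_dec = NonLocalBlock(ch[2])             # non-local module, decoder side
        self.dec2 = conv_block(ch[2], ch[1])
        self.dec1 = conv_block(ch[1], ch[0])
        self.last = nn.Conv2d(ch[0], 1, 3, padding=1)  # reconstruction layer, no ReLU

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.down[0](e1))
        e3 = self.nl_enc(self.enc3(self.down[1](e2)))
        bottom = self.bottom(self.down[2](e3))
        d3 = self.nl_dec(self.dec3(torch.cat([self.up3(bottom), e3], 1)))
        d2 = self.dec2(torch.cat([self.up2(d3), e2], 1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], 1))
        return self.last(d1)
```

For 96 × 96 training patches, the three stride-2 stages bring the third scale down to 24 × 24 pixels, where the pairwise attention of the non-local blocks remains computationally affordable.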

Fig. 1. (a) Schematic of a conventional dark-field microscope for optical image acquisition. (b) Blurry optical images of AuNP dimers with varied spacing distances, collected by a CMOS camera mounted in the microscope. The light source of the optical microscope is a halogen lamp emitting broadband light from 300 nm to 2000 nm; here a 633 nm narrow-band filter is added during image acquisition. (c) The corresponding SEM images of the dimers.

Fig. 2. CNN framework for deep-learning deconvolution microscopy. (a) The architecture of the proposed convolutional neural network for image transformation and registration. (b) The input optical images of AuNPs with a field of view of 10.8 × 10.8 µm²; the inset shows a high-resolution TEM image of the AuNPs. The scale bar represents 200 nm. (c) The output SEM-like super-resolution images of AuNPs reproduced by the CNN.

The proposed network is then trained with the input optical images of randomly distributed gold nanospheres collected at 100X magnification (Fig. 2(b)). The illumination source is a halogen lamp emitting broadband light from 300 nm to 2000 nm. Note that the sCMOS camera collecting the micrographs is sensitive only to light in the 400-900 nm wavelength range, a band broad enough to smear out the diffraction rings around the nanoparticles, which is convenient for network training. To obtain ground truths, we acquired SEM images of these AuNPs at 10,000X magnification. At this magnification level, individual or clustered nanospheres are well resolved, and the field of view of the SEM image is comparable to that of the optical images. We can thus collect an adequate dataset with wide-field optical micrographs of randomly distributed AuNP multimers as inputs and their SEM images as ground truths. The transmission-electron microscopy (TEM) image of the AuNPs illustrates their dimensional uniformity (inset of Fig. 2(b)). The experimental micrographs were pre-processed for image alignment and registration. The optical and SEM images were tailored into 480 pairs of non-overlapping subpictures of 400 × 400 pixels, of which 360 pairs were used for training and the remainder for testing. More details on sample preparation, image acquisition, and renormalization are given in the Appendices.

With the designed network architecture and the prior-known datasets, the model parameters can be updated by optimizing a loss function over the training set. We initialize the convolutional filters using the method of He et al. [36] and adopt the Adam algorithm [37] to optimize the trainable parameters by minimizing the L1 loss function. Owing to the local connectivity and shared weights of the proposed CNN model, we can train on small patches. The network is trained for 120 epochs; in each epoch, we randomly crop 32 × 3,000 patch pairs of size 96 × 96 pixels from the training dataset. The learning rate starts at 10^-4 and is halved every 30 epochs. The mini-batch size is set to 32, and the other hyper-parameters of the Adam algorithm are kept at their default settings.
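
A minimal training-loop sketch under the stated settings is given below (He initialization, Adam, L1 loss, 120 epochs, batch size 32, 96 × 96 crops, learning rate 10^-4 halved every 30 epochs). NonLocalUNet refers to the architecture sketch in Section 2; the random tensors are hypothetical stand-ins for the paired optical/SEM patches.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

model = NonLocalUNet()

# He (Kaiming) initialization of all convolutional filters [36]
for m in model.modules():
    if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
        nn.init.kaiming_normal_(m.weight, nonlinearity="relu")
        if m.bias is not None:
            nn.init.zeros_(m.bias)

# stand-in patch pairs; in practice, 32 x 3,000 random 96 x 96 crops per epoch
patches = TensorDataset(torch.randn(3000, 1, 96, 96), torch.randn(3000, 1, 96, 96))
loader = DataLoader(patches, batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # other Adam defaults kept
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.5)
criterion = nn.L1Loss()

for epoch in range(120):
    for optical, sem in loader:
        optimizer.zero_grad()
        loss = criterion(model(optical), sem)  # L1 loss against the SEM ground truth
        loss.backward()
        optimizer.step()
    scheduler.step()  # halve the learning rate every 30 epochs
```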

3. Super-resolution of nanoparticle multimers and nanowires

The proposed CNN is thus trained to transform the input optical micrographs of randomly distributed AuNP clusters into deblurred output images with SEM-level resolution (Fig. 2(c)). We first select some AuNP dimers from the testing dataset for analysis. The input optical images of dimers with inter-particle distances varying from 208 nm to 928 nm are shown in Fig. 3(a1-d1). In the optical micrographs, the most closely spaced dimer exhibits only one elliptical bright spot (Fig. 3(a1)), while two spherical spots gradually emerge and become distinguishable as the dimer distance increases (Fig. 3(b1-d1)). Remarkably, the output images from the trained CNN present distinct non-diffracted spots (Fig. 3(a2-d2)) that are comparable to the ground-truth SEM images (Fig. 3(a3-d3)). The spot size of an isolated nanosphere is ∼700 nm in the wide-field optical images (Fig. 3(d1)), whereas the corresponding sizes in the network outputs and the ground-truth SEM images are both ∼200 nm. The deep-learning network can therefore reconstruct diffraction-limited optical images of AuNP dimers into SEM-level super-resolution images. To visually illustrate the accuracy of the deconvolution algorithm, we extracted the line profiles of grayscale intensity across the midpoint of each dimer in the input, output, and ground-truth images, compared in Fig. 3(a4-d4), further corroborating the super-resolving capability of our CNN model.
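
A minimal sketch of the line-profile comparison in Fig. 3(a4-d4) is given below: grayscale intensities are sampled along the dimer axis through the midpoint of each particle pair. The use of scikit-image's profile_line is our assumption; the paper does not name the tool used.

```python
import numpy as np
from skimage.measure import profile_line

def dimer_profile(img: np.ndarray, src: tuple, dst: tuple) -> np.ndarray:
    """Intensities sampled along the segment src -> dst ((row, col) coordinates)."""
    return profile_line(img, src, dst, mode="reflect")

# e.g., compare input, output, and ground truth along the same cut:
# p_in, p_out, p_gt = (dimer_profile(im, (200, 150), (200, 250)) for im in images)
```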

Fig. 3. Super-resolution imaging of AuNP dimers reconstructed by deep-learning algorithms. (a1-d1) The input blurry optical images of AuNP dimers with varying inter-particle distances, acquired on a conventional dark-field microscope. (a2-d2) The corresponding deconvolution images reconstructed by the trained CNN network. (a3-d3) The SEM images of AuNP dimers used as ground truths. These images are renormalized or rescaled. (a4-d4) Comparison of light-intensity profiles along the dimer axis between the input (black), output (green) and ground truth images (red).

The proposed CNN deconvolution method also handles more challenging cases of super-resolution reconstruction. As an example, we randomly select three groups of input, output, and ground-truth images containing aggregated dimers, trimers, tetramers, etc., as shown in Fig. 4(a1-c1, a2-c2, a3-c3). Some representative zoomed-in regions of multimers are depicted in Fig. 4(d). For most dimers and trimers with random orientations, the profiles and dimensions are reconstructed well by the network. In particular, the CNN framework can recognize the minor difference in the optical images between the dimer (region 1) and the closely packed trimer (region 2), even though their far-field intensity profiles resemble each other. For the catenulate trimer (region 3) and the tetramer (region 4), the bright spots in the optical micrograph become elliptical, exhibiting more distinguishing features and thus enabling higher identification accuracy. However, for AuNP clusters with more constituent nanospheres, such as the hexamer marked by the red square in Fig. 4(a2, a3), the discrepancy between the output images and the ground truths becomes apparent, particularly in the misjudgement of particle number and arrangement. A decrease in accuracy with increasing particle number is expected, because the optical patterns of larger multimers are more complex and fewer such clusters are available for network training.

Fig. 4. Super-resolution imaging of randomly distributed AuNP multimers (N ≤ 4). (a1, b1, c1) The input wide-field optical images. (a2, b2, c2) The corresponding output deconvolution images and (a3, b3, c3) the ground-truth SEM images. (d) Zoomed-in regions of selected multimers bounded by green squares.

To quantitatively evaluate the recognition accuracy of the proposed CNN model, we randomly chose 100 sets of isolated AuNP clusters tailored from the testing dataset for statistics. The output and SEM images of these complicated multimers (N ≥ 2) are merged into kaleidoscopic images, as shown in Fig. 5. On careful inspection, the computational discrepancies of the CNN model fall into two classes: inconsistency of particle positions and misjudgement of particle number. To capture both kinds of error, the correlation coefficient R between the network outputs and the ground-truth SEM images is defined as the criterion of recognition accuracy:

$$R(I_1, I_2) = \mathrm{cov}(I_1, I_2)\Big/\sqrt{\mathrm{cov}(I_1, I_1)\,\mathrm{cov}(I_2, I_2)}$$
where cov(I1, I2) is the covariance over the sample dataset, I1 is the grayscale intensity of the output images, and I2 is the intensity of the ground-truth SEM images with zero-padding beyond the nanoparticle domains. The histograms of R for each group of isolated nanoparticle clusters are shown in Fig. 6(a). For monomers and dimers, the correlation coefficients exceed 0.8 in most cases, while R gradually decreases with increasing particle number. The average R over the 100 randomly selected sets of isolated multimers shows this trend explicitly (Fig. 6(b)), as a consequence of the decreasing number of corresponding multimers available for network training (Fig. 6(c)). More paired training data would yield higher accuracy.
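
A direct NumPy transcription of Eq. (1) can serve as a sketch of this criterion; both images are taken as grayscale arrays of equal shape, with the SEM image assumed already zero-padded outside the nanoparticle domains as described above.

```python
import numpy as np

def correlation_coefficient(i1: np.ndarray, i2: np.ndarray) -> float:
    """R(I1, I2) = cov(I1, I2) / sqrt(cov(I1, I1) * cov(I2, I2))."""
    c = np.cov(i1.ravel().astype(float), i2.ravel().astype(float))  # 2x2 covariance matrix
    return float(c[0, 1] / np.sqrt(c[0, 0] * c[1, 1]))
```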

Fig. 5. Kaleidoscope images of 100 randomly selected pairs of AuNP multimers. (a1-d1) The output images from the trained CNN for dimers (a1), trimers (b1), tetramers (c1), and more complicated AuNP clusters (d1). (a2-d2) The corresponding ground-truth SEM images.

Fig. 6. The accuracy rate of transforming the optical images of AuNP multimers by the trained CNN framework. (a) The counted distributions of the correlation coefficient for multimers with varied particle numbers. (b) The average correlation coefficient between the output images and ground-truth SEM images of AuNP multimers. (c) The proportion of AuNP multimers with different particle numbers.

So far, our deep-learning model has demonstrated its efficiency in addressing the image-recovery problem for AuNP clusters. However, the versatility of the proposed CNN algorithm requires further verification on other types of nanostructures. To this end, we also employed our deep-learning model to reconstruct the morphologies of randomly cross-linked silver nanowires from their blurry optical images (Fig. 7). The silver nanowires have a diameter of ∼100 nm and a length of ∼10 µm. They were spin-coated onto a glass substrate following the same experimental procedures described in Appendix A, and their optical and SEM images were measured under the same conditions as those of the AuNP experiments.

Fig. 7. The CNN reconstruction of nanowire morphologies from their optical images. (a) The input optical image of silver nanowires for testing, collected under the same conditions as the nanoparticle experiments. (b) The output testing micrograph of nanowires. (c) The SEM image of the silver nanowires. The scale bar represents 10 µm. (d, e) The zoomed-in images outlined by blue and red dashed squares in (b). (f) The loss curve during network training.

The silver nanowires were randomly distributed across the substrate surface, as seen in the optical image in Fig. 7(a). The nanowire net exhibits numerous intersections, which inevitably influence the resulting optical diffraction pattern. Using the trained CNN algorithm, available as Code 1 (Ref. [38]), the output image of the testing nanowires can be reconstructed, as shown in Fig. 7(b). Compared with the ground-truth SEM image in Fig. 7(c), it is evident that the majority of sections of the intricate nanowire net are accurately recovered, although minor discrepancies remain near the intersections. In addition, thinner nanowires within densely packed regions are prone to misjudgement, producing subtle ripples in the output nanowires. To illustrate these challenges more distinctly, we provide zoomed-in views of two local regions in Fig. 7(d) and 7(e). We also present the loss curve observed during network training, which shows the gradual decrease and convergence of the loss with increasing epochs (Fig. 7(f)). Note that the motivation for designing the CNN algorithm in this work is to improve the resolution of optical microscopy for common artificial nanostructures with regular shapes (disks, spheres, wires, etc.), because these geometries are frequently used to construct more complex nano-patterns in nanomaterials, nanophotonics, nanoelectronics, and even nano-mechanics. Further modifications and optimizations of the present network are expected to improve its applicability to nanostructures with irregular shapes and sizes.

4. Discussion and conclusion

Besides the problem of data acquisition for network training, a well-designed CNN also plays a vital role in deconvolving blurry images into high-resolution ones. Directly applying common full-CNN architectures may suffer from poor generality: the trained models are heavily data-dependent and lack interpretability. One solution is to incorporate expert domain knowledge into the CNN design [39]. Such knowledge-based CNNs are expected to generalize better from sparse training data, with known physical priors serving to constrain the training [32]. Among the various kinds of domain knowledge, non-local self-similarity has been extensively studied, as it is a generic prior for most image-restoration tasks [35]. Recently, it has been pointed out that U-net architectures with skip connections naturally impose self-similarity across multiple scales [40,41]. In parallel, non-local denoising modules, which can be fused into existing CNNs for end-to-end training, have been proposed to exploit the intrinsic non-local self-similarity across the spatial domain.

Based on this analysis, we presented a deep non-local U-net that incorporates the non-local denoising module into the well-known U-net architecture. This integration enables our network to impose self-similarity priors implicitly across multiple scales through the U-net structure, and to exploit non-local self-similarity explicitly within a single scale via the non-local denoising module. The proposed network proves especially effective for the specific task of transforming diffraction-limited optical images into SEM-like super-resolution images. Leveraging the robust learning capability of the well-designed deep CNN structure and an extensive dataset, the network accurately estimates SEM-level super-resolution images from their diffraction-limited optical counterparts. To validate the capabilities of our deep-learning model, we employed aggregated AuNP clusters and silver nanowires as prime examples for super-resolution imaging. The resulting super-resolution images of the AuNP clusters facilitate the study of homogeneity in plasmonic sensors without the need for electron microscopy, thereby preserving the integrity of molecules on the sensing surface. In summary, our work introduces a deep-learning-enabled microscopy technique that surpasses the diffraction limit, opening new possibilities for deep-learning-augmented computational microscopies in the nanoscience and nanofabrication industries.

Appendix A: sample preparation

The mono-dispersed AuNPs with a diameter of around 200 nm were prepared by a wet-chemistry, seed-mediated growth method in conjunction with mild oxidation processes [42]. Following the procedures below, the dispersed AuNPs cluster mostly into dimers, trimers, and tetramers. First, the aqueous AuNP solution was centrifuged at 2000 rpm for 5 minutes, after which the AuNPs aggregated and precipitated. The clear water in the top layer of the original solution was carefully removed, and fresh deionized water of the same volume was added. The AuNPs were then re-dispersed in an ultrasonic bath for 10 minutes. After repeating this process 2 or 3 times, the coating material (cetyltrimethylammonium bromide, CTAB) around the AuNPs became ultrathin and scarce, and cluster formation occurred randomly when the solution was left undisturbed for a few hours. Finally, the fresh gold-nanosphere solution was dropped onto ITO substrates, which were then heated on a hot plate at 60 °C for fast evaporation. To facilitate micrograph alignment between the optical and SEM images, arrayed crosshair marks were first printed on the ITO substrates by photolithography, metal deposition, and lift-off. The samples were further cleaned with O2 plasma to remove organic coatings from the surface of the nanospheres.

The AuNP dimers illustrated in Fig. 1(b) were fabricated by electron-beam lithography and standard lift-off processes. A thin layer of PMMA resist was first spin-coated on the ITO substrate. After pre-baking at 180 °C, nano-hole patterns were defined by the focused electron beam and developed in MIBK:IPA (1:3) solution for 1 min. A 100-nm-thick gold film was then deposited on the substrate, which was subsequently immersed in acetone for 1 day to remove the resist. The remaining Au nanospheres located in the nano-holes have a diameter of around 145 nm. The silver nanowires were purchased from Nanjing XFNano Materials Tech Co., Ltd., with a diameter of ∼100 nm and a length of ∼10 µm. The nanowire samples for image collection were fabricated following the same procedures as the AuNP sample preparation.

Appendix B: data acquisition

The optical micrographs of the gold nanospheres and silver nanowires were collected by an optical microscope (Olympus BX51) equipped with a dark-field objective (Olympus LMPLFLN 100X, NA = 0.9) and a high-performance grayscale sCMOS camera (Andor Zyla 4.2). The light source of the microscope is a halogen lamp emitting broadband white light from 300 nm to 2000 nm, but the visible sCMOS camera is sensitive only to wavelengths of 400-900 nm. The optical path of the microscope is shown schematically in Fig. 1(a). The ground-truth SEM images of the AuNPs were collected by a scanning electron microscope (Tescan Vega3) at 10,000-fold magnification, where the SEM magnification is defined as the ratio of the picture size to its real scope size. At this moderate magnification, individual and clustered nanoparticles are well distinguished. The field of view (FoV) of a 2048 × 2048 pixel SEM micrograph is about 55 × 55 µm², i.e., one pixel corresponds to 27 nm. In contrast, the FoV of a 2048 × 2048 pixel optical image is about 135 × 135 µm² (∼66 nm per pixel), as determined by measuring the scale rulers (and alignment marks) pre-printed on the substrate. The same area therefore spans a different number of pixels in the optical and SEM images, so we adjust the pixel length of the optical images to match that of the SEM images by interpolation-based upscaling.
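
One plausible way to match the optical pixel pitch (∼66 nm) to the SEM pitch (∼27 nm) is bicubic upscaling, sketched below; the use of scikit-image here is our assumption, as the paper does not name an interpolation library.

```python
import numpy as np
from skimage.transform import resize

OPT_NM_PER_PX, SEM_NM_PER_PX = 66.0, 27.0

def match_pixel_scale(optical: np.ndarray) -> np.ndarray:
    """Upsample the optical image so one pixel spans the SEM pixel length."""
    scale = OPT_NM_PER_PX / SEM_NM_PER_PX  # ~2.4x enlargement
    new_shape = tuple(round(s * scale) for s in optical.shape)
    return resize(optical, new_shape, order=3, anti_aliasing=False)  # bicubic
```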

The positional alignment of the optical and SEM images was based on the alignment marks, which are arrayed micro-disks with a hole at the midpoint. Since the alignment error is generally < ±2 pixels, the position and orientation of individual nanospheres can be aligned precisely. The SEM images were then cut into smaller sub-pictures of 400 × 400 pixels (FoV ∼ 10.8 × 10.8 µm²) as ground truths, and the optical images of the same scope were tailored correspondingly as network inputs. The image contrast was then adjusted, or linearly transformed, so that the foreground and background of the paired images have the same average grayscale values. We also apply zero-padding around the image boundaries before each convolution, to keep the spatial size of all feature maps equal to that of the input. Finally, the paired and reformed optical and SEM micrographs were used for network training.
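
The linear grayscale adjustment can be sketched as follows: an image is mapped linearly so that its foreground and background means hit target values shared between the optical and SEM modalities. The threshold-based foreground mask is an illustrative assumption, since the paper does not specify how the foreground is segmented.

```python
import numpy as np

def linear_contrast_match(img: np.ndarray, fg_target: float, bg_target: float,
                          rel_thresh: float = 0.5) -> np.ndarray:
    img = img.astype(float)
    fg_mask = img > rel_thresh * img.max()      # crude bright-foreground mask
    fg, bg = img[fg_mask].mean(), img[~fg_mask].mean()
    a = (fg_target - bg_target) / (fg - bg)     # solve a*fg + b = fg_target,
    b = bg_target - a * bg                      #       a*bg + b = bg_target
    return a * img + b
```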

Funding

National Natural Science Foundation of China (62022001, 62005070); University Grants Committee (A-CityU101/20).

Acknowledgments

X. H. thanks the Key Laboratory of Nanodevices of Jiangsu Province at SINANO, and the lab Nano-X at SINANO, for technical support in the SEM and TEM characterization of the nanoparticles. M. R. acknowledges support from the Royal Society and the Wolfson Foundation.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. E. Betzig, G. H. Patterson, R. Sougrat, et al., “Imaging Intracellular Fluorescent Proteins at Nanometer Resolution,” Science 313(5793), 1642–1645 (2006). [CrossRef]  

2. M. J. Rust, M. Bates, and X. Zhuang, “Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM),” Nat. Methods 3(10), 793–796 (2006). [CrossRef]  

3. S. W. Hell and J. Wichmann, “Breaking the diffraction resolution limit by stimulated emission: stimulated-emission-depletion fluorescence microscopy,” Opt. Lett. 19(11), 780–782 (1994). [CrossRef]  

4. F. Ströhl and C. F. Kaminski, “Frontiers in structured illumination microscopy,” Optica 3(6), 667–677 (2016). [CrossRef]  

5. Y. Wu and H. Shroff, “Faster, sharper, and deeper: structured illumination microscopy for biological imaging,” Nat. Methods 15(12), 1011–1019 (2018). [CrossRef]  

6. N. Fang, H. Lee, C. Sun, et al., “Diffraction-Limited Optical Imaging with a Silver Superlens,” Science 308(5721), 534–537 (2005). [CrossRef]  

7. Z. Liu, H. Lee, Y. Xiong, et al., “Far-Field Optical Hyperlens Magnifying Sub-Diffraction-Limited Objects,” Science 315(5819), 1686 (2007). [CrossRef]  

8. X. Zhang and Z. Liu, “Superlenses to overcome the diffraction limit,” Nat. Mater. 7(6), 435–441 (2008). [CrossRef]  

9. Y. U. Lee, J. Zhao, Q. Ma, et al., “Metamaterial assisted illumination nanoscopy via random super-resolution speckles,” Nat. Commun. 12(1), 1559 (2021). [CrossRef]  

10. J.-B. Sibarita, “Deconvolution Microscopy,” in Microscopy Techniques, J. Rietdorf, ed. (Springer, Berlin, Heidelberg, 2005), pp. 201–243.

11. D. Mengu, M. S. Sakib Rahman, Y. Luo, et al., “At the intersection of optics and deep learning: statistical inference, computing, and inverse design,” Adv. Opt. Photonics 14(2), 209–290 (2022). [CrossRef]  

12. J. W. Goodman, Introduction to Fourier optics (Macmillan Learning, 2017).

13. A. Ashok, P. K. Baheti, and M. A. Neifeld, “Compressive imaging system design using task-specific information,” Appl. Opt. 47(25), 4457–4471 (2008). [CrossRef]  

14. A. C. Hansen and B. Adcock, “Compressed Sensing for Imaging,” in Compressive Imaging: Structure, Sampling, Learning, A. C. Hansen, ed., (Cambridge University Press, Cambridge, 2021), pp. 349–352.

15. E. Narimanov, “Resolution limit of label-free far-field microscopy,” Adv. Photonics 1(05), 056003 (2019). [CrossRef]  

16. G. Barbastathis, A. Ozcan, and G. Situ, “On the use of deep learning for computational imaging,” Optica 6(8), 921–943 (2019). [CrossRef]  

17. C. Belthangady and L. A. Royer, “Applications, promises, and pitfalls of deep learning for fluorescence image reconstruction,” Nat. Methods 16(12), 1215–1225 (2019). [CrossRef]  

18. Z. Liu, L. Jin, J. Chen, et al., “A survey on applications of deep learning in microscopy image analysis,” Comput. Biol. Med. 134, 104523 (2021). [CrossRef]  

19. A. Sinha, J. Lee, S. Li, et al., “Lensless computational imaging through deep learning,” Optica 4(9), 1117–1125 (2017). [CrossRef]  

20. Y. Xue, S. Cheng, Y. Li, et al., “Reliable deep-learning-based phase imaging with uncertainty quantification,” Optica 6(5), 618–629 (2019). [CrossRef]  

21. D. Mengu and A. Ozcan, “All-Optical Phase Recovery: Diffractive Computing for Quantitative Phase Imaging,” Adv. Opt. Mater. 10(15), 2200281 (2022). [CrossRef]  

22. U. S. Kamilov, I. N. Papadopoulos, M. H. Shoreh, et al., “Learning approach to optical tomography,” Optica 2(6), 517–522 (2015). [CrossRef]  

23. A. Goy, G. Rughoobur, S. Li, et al., “High-resolution limited-angle phase tomography of dense layered objects using deep neural networks,” Proc. Natl. Acad. Sci. 116(40), 19848–19856 (2019). [CrossRef]  

24. Y. Rivenson, Y. Zhang, H. Günaydın, et al., “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light: Sci. Appl. 7, 17141 (2018). [CrossRef]  

25. Y. Wu, Y. Rivenson, Y. Zhang, et al., “Extended depth-of-field in holographic imaging using deep-learning-based autofocusing and phase recovery,” Optica 5(6), 704–710 (2018). [CrossRef]  

26. Z. Ren, Z. Xu, and E. Y. Lam, “Learning-based nonparametric autofocusing for digital holography,” Optica 5(4), 337–344 (2018). [CrossRef]  

27. K. Yanny, K. Monakhova, R. W. Shuai, et al., “Deep learning for fast spatially varying deconvolution,” Optica 9(1), 96–99 (2022). [CrossRef]  

28. L. Fang, F. Monroe, S. W. Novak, et al., “Deep learning-based point-scanning super-resolution imaging,” Nat. Methods 18(4), 406–416 (2021). [CrossRef]  

29. M. Deng, S. Li, Z. Zhang, et al., “On the interplay between physical and content priors in deep learning for computational imaging,” Opt. Express 28(16), 24152–24170 (2020). [CrossRef]  

30. B. Orazbayev and R. Fleury, “Far-Field Subwavelength Acoustic Imaging by Deep Learning,” Phys. Rev. X 10, 031029 (2020). [CrossRef]  

31. F. Wang, Y. Bian, H. Wang, et al., “Phase imaging with an untrained neural network,” Light: Sci. Appl. 9(1), 77 (2020). [CrossRef]  

32. P. Wijesinghe and K. Dholakia, “Emergent physics-informed design of deep learning for microscopy,” J. Phys. Photonics 3(2), 021003 (2021). [CrossRef]  

33. Z. Burns and Z. Liu, “Untrained, physics-informed neural networks for structured illumination microscopy,” Opt. Express 31(5), 8714–8724 (2023). [CrossRef]  

34. P. Si, N. Razmi, O. Nur, et al., “Gold nanomaterials for optical biosensing and bioimaging,” Nanoscale Adv. 3(10), 2679–2698 (2021). [CrossRef]  

35. A. Buades, B. Coll, and J. M. Morel, “A non-local algorithm for image denoising,” in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), (2005), vol. 2, pp. 60–65.

36. K. He, X. Zhang, S. Ren, et al., “Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification,” in 2015 IEEE International Conference on Computer Vision (ICCV), (2015), 1026–1034.

37. D. P. Kingma and J. Ba, “Adam: A Method for Stochastic Optimization,” arXiv, arXiv:1412.6980 (2015). [CrossRef]  

38. X. Hu, “Source Code for Optical Superresolution Microscopy,” figshare (2023), https://doi.org/10.6084/m9.figshare.24424552.

39. W. Yang, X. Zhang, Y. Tian, et al., “Deep Learning for Single Image Super-Resolution: A Brief Review,” IEEE Trans. Multimedia 21(12), 3106–3121 (2019). [CrossRef]  

40. S. Lefkimmiatis, “Non-local Color Image Denoising with Convolutional Neural Networks,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2017), 5882–5891.

41. V. Lempitsky, A. Vedaldi, and D. Ulyanov, “Deep Image Prior,” in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, (2018), 9446–9454.

42. Q. Ruan, L. Shao, Y. Shu, et al., “Growth of Monodisperse Gold Nanospheres with Diameters from 20 nm to 220 nm and Their Core/Satellite Nanostructures,” Adv. Opt. Mater. 2(1), 65–73 (2014). [CrossRef]  

Supplementary Material (1)

Code 1: MATLAB source code for re-implementation (Ref. [38]).
