Optica Publishing Group

Recovering higher dimensional image data using multiplexed structured illumination

Open Access

Abstract

Structured illumination (SI) using non-uniform intensity patterns is well-known for improving lateral resolution in microscopy. Here, we propose a multiplexed SI technique for recovering images with higher lateral resolution and with higher dimensional information at the same time. In this framework, we use unknown non-uniform intensity patterns for incoherent sample illumination and use the corresponding acquisitions for image recovery. In the first example, we use the reported framework to recover sample images with higher lateral resolution and separate different sections of the sample along the z-direction. In the second example, we recover the sample images with higher lateral resolution and separate the images at different spectral bands. The reported multiplexed-SI framework may find applications in general imaging settings where higher dimensional information is mixed in 2D image measurements. It can also be used in microscopy settings for computational sectioning and multispectral imaging.

© 2015 Optical Society of America

1. Introduction

Higher information content in images is desired in many application areas. However, typical images are 2D and represent a mixture of higher dimensional data. Dedicated hardware is needed to separate the mixture and fit it into a higher dimensional data cube (as in 3D confocal imaging and multispectral imaging). Consider the example of fluorescence microscopy, where the emission from the sample is captured by a 2D image sensor. The captured 2D image represents a mixture of 2D data at different z-sections and a mixture of 2D data at different wavelengths. The information at different z-sections and at different spectral bands is considered higher dimensional data in this case.

Here, we explore a multiplexed framework for recovering sample images with higher lateral resolution and with higher dimensional information at the same time. The reported framework, termed multiplexed structured illumination (multiplexed-SI), builds upon the conventional structured illumination (SI) technique, where non-uniform intensity patterns are used for sample illumination and the corresponding acquisitions are used for image recovery [1, 2]. In a typical implementation of SI, sinusoidal patterns are used to modulate high-frequency components into the passband of the objective lens. The recorded images therefore contain sample information beyond the resolution limit of the employed optics [1, 2]. Along the same lines, speckle patterns have been used in SI for the same purpose, and resolution improvement has been demonstrated using different reconstruction methods, including phase retrieval, optimization, and Bayesian estimation [3–13]. However, to the best of our knowledge, these techniques mainly target resolution improvement, and the acquired images have not been modeled as a mixture of higher dimensional data. Here, we propose a multiplexed framework that improves the lateral resolution and recovers higher dimensional data at the same time. The reported multiplexed-SI framework may find applications in general incoherent imaging settings where higher dimensional data is mixed in 2D image measurements.

2. Multiplexed structured illumination

The basic idea of the reported multiplexed-SI framework is shown in Fig. 1. Similar to the concept of conventional SI, we use unknown speckle patterns for sample illumination. The captured images are then used to recover sample images with higher lateral resolution and with higher dimensional information. Figure 1(a) shows the case of recovering different z sections of the sample and Fig. 1(b) shows the case of recovering images at different spectral bands. The forward imaging model of these two cases can be described as follows:

$$\mathcal{F}(I_n) = \sum_m \mathrm{OTF}_m \cdot \mathcal{F}\left(I_{\mathrm{obj}\_m} \cdot P_{mn}\right) \qquad (1)$$
where $\mathcal{F}$ stands for the Fourier transform, $I_n$ stands for the 2D image measurements, $\mathrm{OTF}_m$ stands for the optical transfer function (OTF) of the objective lens (a known parameter in our implementation), $I_{\mathrm{obj}\_m}$ stands for the ground-truth image of the sample, and $P_{mn}$ stands for the illumination patterns. In Eq. (1), the summation over subscript ‘m’ represents the mixture of higher dimensional data. For example, we can model the captured image $I_n$ as a summation of red, green, and blue channels with m = 1, 2, and 3. A second example is to model the captured images as a summation of m different 2D sections along the z-direction. In Eq. (1), we assume no interaction between different incoherent modes. For each mode ‘m’, we have ‘n’ different intensity patterns for sample illumination; thus, $P_{mn}$ carries two subscripts. In our implementation, we translate the unknown illumination pattern to ‘n’ different spatial positions to obtain the corresponding 2D image measurements. As a result, we only have ‘m’ unknown illumination patterns. The goal here is to recover the different modes of the object $I_{\mathrm{obj}\_m}$ as well as the unknown illumination patterns $P_{mn}$ (m = 1, 2, …) from the 2D image measurements $I_n$. If m = 1, Eq. (1) reduces to the forward imaging model of conventional SI [3, 4, 13].
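As a concrete illustration, the forward model of Eq. (1) can be sketched in a few lines of NumPy. The helper name `simulate_measurement` is ours, and the OTFs are assumed to be supplied as 2D arrays in frequency space (FFT layout):

```python
import numpy as np

def simulate_measurement(objs, patterns, otfs):
    """Simulate one 2D measurement I_n following Eq. (1): the incoherent
    sum over modes m of OTF_m-filtered images of (I_obj_m * P_mn)."""
    I_n = np.zeros_like(objs[0])
    for obj, pat, otf in zip(objs, patterns, otfs):
        # Multiply each object mode by its illumination pattern, then
        # apply the OTF as a low-pass filter in the Fourier domain.
        I_n += np.real(np.fft.ifft2(np.fft.fft2(obj * pat) * otf))
    return I_n
```

Each mode is filtered independently before summation, which encodes the no-interaction assumption stated above.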


Fig. 1 Images corresponding to different illumination patterns are used to recover resolution-enhanced images at different z sections (a) and at different wavelengths (b).


The recovery process is inspired by and modified from the mode multiplexing and decomposition scheme in ptychography [14–16]. It starts with initial guesses of the different modes of the object $I_{\mathrm{obj}\_m}$ and the unknown illumination patterns $P_{mn}$ (m = 1, 2, …). We first define $I_{pm}$ and $I_{tm}$ as follows: $I_{pm} = I_{\mathrm{obj}\_m} \cdot P_{mn}$ and $\mathcal{F}(I_{tm}) = \mathrm{OTF}_m \cdot \mathcal{F}(I_{pm})$. Based on these definitions and the measurements $I_n$, we have the following updating procedures for mode m of the object and the unknown illumination pattern (the illumination pattern is different for different modes):

$$I_{tm}^{\mathrm{updated}} = I_{tm} \cdot \frac{I_n}{\sum_m I_{tm}} \qquad (2)$$
$$\mathcal{F}\left(I_{pm}^{\mathrm{updated}}\right) = \mathcal{F}(I_{pm}) + \mathrm{OTF}_m \cdot \left(\mathcal{F}\left(I_{tm}^{\mathrm{updated}}\right) - \mathcal{F}(I_{tm})\right) \qquad (3)$$
$$I_{\mathrm{obj}\_m}^{\mathrm{updated}} = I_{\mathrm{obj}\_m} + \frac{P_{mn}}{\left(\max(P_{mn})\right)^2}\left(I_{pm}^{\mathrm{updated}} - I_{pm}\right) \qquad (4)$$
$$P_{mn}^{\mathrm{updated}} = P_{mn} + \frac{I_{\mathrm{obj}\_m}^{\mathrm{updated}}}{\left(\max\left(I_{\mathrm{obj}\_m}^{\mathrm{updated}}\right)\right)^2}\left(I_{pm}^{\mathrm{updated}} - I_{pm}\right) \qquad (5)$$
Equations (2)-(5) represent 4m equations in total. The updating process is repeated for all n measurements, and the entire process is iterated until convergence, which can be measured by the difference between two successive recoveries. In a practical implementation, we can simply terminate after a predefined loop number, typically 10-100. We can draw connections between the above procedures and the ptychography approach [14, 15]. The key part of ptychography algorithms is an operation called Fourier magnitude projection, where the magnitude of the exit-wave estimate is replaced by the square root of the measured intensity and the phase is kept unchanged. In multi-state ptychography, the summation of all coherent states' amplitudes is used in the replacement process of Fourier magnitude projection. Here, in the case of incoherent imaging, we only consider the intensity of the images, and we use Eq. (2) as an updating step that is similar to the Fourier magnitude projection in ptychography [14, 15]. The rest of the equations are the same as those reported in Ref. [4].
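The updating procedures of Eqs. (2)-(5) can be sketched as follows. This is a minimal NumPy illustration with function and variable names of our own choosing; it assumes real, symmetric OTFs (so no complex conjugation appears in the Eq. (3) step) and one illumination-pattern estimate per mode:

```python
import numpy as np

def update_step(objs, patterns, I_n, otfs, eps=1e-12):
    """One multiplexed-SI update for a single measurement I_n, following
    Eqs. (2)-(5). objs and patterns are lists (one entry per mode m) of
    the current estimates I_obj_m and P_mn; otfs are 2D frequency-space
    arrays OTF_m. The lists are updated in place and returned."""
    # Forward pass: I_pm = I_obj_m * P_mn and F(I_tm) = OTF_m * F(I_pm).
    I_pm = [o * p for o, p in zip(objs, patterns)]
    F_tm = [np.fft.fft2(ip) * otf for ip, otf in zip(I_pm, otfs)]
    I_tm = [np.real(np.fft.ifft2(f)) for f in F_tm]
    total = sum(I_tm)  # the model's current prediction of I_n
    for m in range(len(objs)):
        # Eq. (2): intensity projection (the incoherent analogue of the
        # Fourier magnitude projection in ptychography).
        I_tm_new = I_tm[m] * I_n / (total + eps)
        # Eq. (3): propagate the correction back through the OTF.
        F_pm_new = np.fft.fft2(I_pm[m]) + otfs[m] * (np.fft.fft2(I_tm_new) - F_tm[m])
        I_pm_new = np.real(np.fft.ifft2(F_pm_new))
        # Eq. (4): object update, weighted by the illumination pattern.
        objs[m] = objs[m] + patterns[m] / (np.max(patterns[m]) ** 2) * (I_pm_new - I_pm[m])
        # Eq. (5): pattern update, weighted by the refreshed object.
        patterns[m] = patterns[m] + objs[m] / (np.max(objs[m]) ** 2) * (I_pm_new - I_pm[m])
    return objs, patterns
```

In a full reconstruction, this step would be applied for each of the n pattern positions in turn and the whole sweep repeated for a fixed number of outer loops (10-100, as noted above).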

We validate the reported approach with two simulations. In the first simulation, we assume a two-layer object whose layers are separated by 6 μm, and a 0.3 numerical aperture (NA) objective is used for image acquisition. We assume the NA of the speckle pattern is 0.9 (which can be generated by large-angle interference). We propagate the light field of the speckle to the two corresponding z-sections, multiply the intensity of the speckle patterns by the two object sections, sum the resulting intensities from the two sections, and low-pass filter the sum with the OTF of the objective. Figure 2(a) shows the raw image under speckle illumination. We can see that the raw image contains information from the two sections at different z positions (Figs. 2(b1) and 2(c1)). In this simulation, we translate the speckles to 220 different positions and generate the corresponding low-resolution images. The recovered images and speckles are shown in Figs. 2(b2)-(b3) and 2(c2)-(c3). We can see that the reported framework is able to separate the two sections and improve the lateral resolution. In Fig. 2(d1), we use the mean square error (MSE) to characterize the imaging performance as a function of noise level; the performance gradually degrades as the noise increases. In Figs. 2(d2) and 2(d3), we plot the MSE as a function of pattern number (with a loop number of 75) and loop number (with a pattern number of 220). We note that the sectioning effect of the conventional SI technique recovers only one section of the 3D sample. The reported approach, on the other hand, is able to recover multiple sections and improve lateral resolution at the same time.
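A band-limited speckle pattern of the kind assumed in this simulation can be sketched by confining a random-phase field to an NA-defined pupil and taking its intensity. The helper name and parameter values below are illustrative, not the exact settings used in the paper:

```python
import numpy as np

def speckle_pattern(n, na, wavelength, pixel_size, seed=None):
    """Generate an n-by-n intensity speckle pattern whose field is
    band-limited to the circular pupil defined by the given NA.
    Parameters are illustrative (lengths in meters)."""
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n, d=pixel_size)
    FX, FY = np.meshgrid(fx, fx)
    pupil = (FX ** 2 + FY ** 2) <= (na / wavelength) ** 2  # circular band limit
    # A random phase inside the pupil yields a fully developed speckle field.
    field = np.fft.ifft2(pupil * np.exp(2j * np.pi * rng.random((n, n))))
    return np.abs(field) ** 2  # intensity speckle
```

Because the intensity is the squared magnitude of the field, its spectrum extends to twice the pupil cutoff, which is why a 0.9-NA speckle can carry sample frequencies well beyond the 0.3-NA detection passband.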


Fig. 2 Image recovery of different z sections using the multiplexed-SI scheme. (a) The low-resolution acquisition under unknown speckle illumination and its Fourier spectrum. (b1) and (c1): the input ground truth at two different z sections. (b2) and (c2): the recovered images using the multiplexed-SI. (b3) and (c3): the recovered speckles at the two z-sections. The MSE is plotted as a function of noise (d1), pattern number (d2), and loop number (d3).


In the second simulation, we assume the object contains three different color channels (red, green, and blue), and the captured images represent a mixture of these three channels, as described by Eq. (1). Figure 3(a) shows the raw image under speckle illumination and the corresponding Fourier spectrum. We added 1% noise to the raw images in this simulation. From the monochromatic raw image in Fig. 3(a), we cannot see any spectral information of the sample. We then translate the speckle patterns to 220 different positions and generate the corresponding mixtures, similar to the first example. The recovered objects and speckles using the multiplexed-SI scheme are shown in Figs. 3(b) and 3(c). The recovered color combination and the ground truth of the three color channels are shown in Figs. 3(d) and 3(e).


Fig. 3 Image recovery of different spectral bands using the multiplexed-SI scheme. (a) The low-resolution acquisition under unknown speckle illumination. Inset shows the corresponding Fourier spectrum. The recovered images (b) and speckles (c) using the multiplexed-SI with 50 loops. (d) The recovered color image by combining three channels. (e) The input ground truth.


3. Experiments

We have performed two experiments to validate the reported imaging scheme. The first experiment aims to separate spectral bands using the proposed multiplexed-SI. As shown in Fig. 4(a), we used a video projector to project an unknown color speckle pattern and translated it to 114 positions. We used a monochromatic CCD camera to capture the corresponding images. Figure 4(b) shows a low-resolution monochromatic image of the color object (also refer to Visualization 1). Figures 4(c1)-4(c3) show the color channels of the object under uniform R/G/B illuminations. Figures 4(d1)-4(d3) show the recovered red, green, and blue channels using the multiplexed-SI scheme. Figures 4(e1)-4(e3) show the recovered speckles. In Figs. 4(c4) and 4(d4), we combine the three channels to compare the color image under uniform illumination with the multiplexed-SI recovery. The high-resolution ground truth is shown in Fig. 4(f), and the corresponding line traces are shown in Fig. 4(g) for comparison. Based on the dip-to-dip feature (~0.4 mm) highlighted in Fig. 4(g), the effective NA is ~0.00058, which is ~1.7 times higher than the measured NA of the imaging system. The multiplexed-SI scheme is thus able to recover the color image of the sample from monochromatic acquisitions while achieving resolution improvement.


Fig. 4 Experimental validation of the multiplexed-SI scheme. (a) The experimental setup. A video projector is used to project translated unknown color speckles on the sample. (b) The raw monochromatic acquisition of the color object (Visualization 1). (c) The images under uniform illumination. The multiplexed-SI recovered images (d) and speckle patterns (e). We used 20 loops for recovery. (f) The ground truth of the object. (g) Line traces of (c4), (d4), and (f).


The second experiment aims to separate two different sections using the multiplexed-SI scheme. We used two pathology sections as the object and placed this object close to a diffuser. The transmitted light from the diffuser forms speckle patterns on the two-layer object. Since the diffuser is placed closer to layer 1, the projected pattern on layer 1 is denser than that on layer 2. We then translated the object to 224 different positions and captured the corresponding images using a microscope system with two Nikon photographic lenses (with an NA of 0.005), as shown in Figs. 5(a) and 5(b) (also refer to Visualization 2). Figures 5(c1)-5(c3) show the uniform-illuminated image, our recovery, and the ground truth of layer 1, respectively. Figure 5(d) shows the corresponding images of layer 2. The line traces of a small feature are shown in Fig. 5(e) for comparison. For Figs. 5(c1) and 5(d1), we removed the other layer to capture images of the single layer (layer 2 is in-focus and layer 1 is out-of-focus). We can see that the proposed imaging scheme is able to recover information in the z-direction and improve the resolution. We can also see that the shadow of layer 1 appears in the layer-2 recovery in Fig. 5(d2). This effect is due to the fact that we do not model the interaction between different modes.


Fig. 5 (a) Imaging setup, where a two-layer sample was used as the object. (b) The captured raw image with 0.005 NA, representing an incoherent mixture of the two sections (Visualization 2). Uniform-illuminated image, our recovery, and ground truth of layer 1 (c1)-(c3) and layer 2 (d1)-(d3). We used 30 loops in the recovery process. (e) Line traces of small features in (c1)-(c3).


4. Summary and discussion

In summary, we have discussed an imaging framework for recovering higher dimensional image data and improving lateral resolution at the same time. In the reported framework, unknown speckle patterns are used for incoherent sample illumination, and the corresponding acquisitions are used for information recovery. The major contribution of this paper is to model the acquired images as an incoherent mixture of higher dimensional data. To the best of our knowledge, this is new to the structured illumination technique and may find broad applications in incoherent imaging settings where higher dimensional information is mixed in 2D image measurements.

There are several future directions for the reported multiplexed-SI framework. 1) In the reported framework, we did not model the interaction between different modes in the mixture; in other words, we assume different modes are independent of each other. This assumption is valid for information at different spectral bands. For information at different z-sections, it is only valid for transparent samples, where emission from one section is independent of the other sections. If we can model the interaction between different modes [17], we may be able to extend the reported scheme to handle diffusive samples. 2) The relationship between the number of raw image acquisitions and the number of modes we can model in the mixture is currently unknown. This relationship may depend on the information redundancy of the different modes, and further research is needed. 3) In the reported framework, we assume the optical transfer function for different imaging modes is known. We could also add an updating step to refine the OTF in the iterative process, similar to Eqs. (4) and (5). Updating the OTF in the iterative process may be useful for handling unknown sample-induced aberrations.
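One possible form of such an OTF refinement step, written by analogy with Eqs. (4) and (5), is sketched below. This is our own hypothetical extension, not part of the reported algorithm; the function name and weighting choice are illustrative:

```python
import numpy as np

def refine_otf(otf, F_pm, F_tm_new, F_tm, eps=1e-12):
    """Hypothetical OTF refinement step in the spirit of Eqs. (4)-(5):
    the spectrum of the current plane estimate, F_pm = F(I_pm), plays
    the role of the weighting term, and the residual between the
    updated and current filtered spectra drives the correction."""
    weight = np.conj(F_pm) / (np.max(np.abs(F_pm)) ** 2 + eps)
    return otf + weight * (F_tm_new - F_tm)
```

Such a step would be interleaved with the object and pattern updates inside the same iterative loop, letting the recovered OTF absorb unknown sample-induced aberrations.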

Acknowledgment

This work was supported by National Science Foundation (NSF) CBET 1510077.

References and links

1. M. G. Gustafsson, “Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy,” J. Microsc. 198(2), 82–87 (2000). [CrossRef]   [PubMed]  

2. M. G. Gustafsson, L. Shao, P. M. Carlton, C. J. Wang, I. N. Golubovskaya, W. Z. Cande, D. A. Agard, and J. W. Sedat, “Three-dimensional resolution doubling in wide-field fluorescence microscopy by structured illumination,” Biophys. J. 94(12), 4957–4970 (2008). [CrossRef]   [PubMed]  

3. E. Mudry, K. Belkebir, J. Girard, J. Savatier, E. Le Moal, C. Nicoletti, M. Allain, and A. Sentenac, “Structured illumination microscopy using unknown speckle patterns,” Nat. Photonics 6(5), 312–315 (2012). [CrossRef]  

4. S. Dong, P. Nanda, R. Shiradkar, K. Guo, and G. Zheng, “High-resolution fluorescence imaging via pattern-illuminated Fourier ptychography,” Opt. Express 22(17), 20856–20870 (2014). [CrossRef]   [PubMed]  

5. H. Yilmaz, E. G. van Putten, J. Bertolotti, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Speckle correlation resolution enhancement of wide-field fluorescence imaging,” Optica 2(5), 424–429 (2015). [CrossRef]  

6. J. Min, J. Jang, D. Keum, S.-W. Ryu, C. Choi, K.-H. Jeong, and J. C. Ye, “Fluorescent microscopy beyond diffraction limits using speckle illumination and joint support recovery,” Sci. Rep. 3, 2075 (2013). [CrossRef]   [PubMed]  

7. O. Wagner, A. Schwarz, A. Shemer, C. Ferreira, J. García, and Z. Zalevsky, “Superresolved imaging based on wavelength multiplexing of projected unknown speckle patterns,” Appl. Opt. 54(13), D51–D60 (2015). [CrossRef]  

8. S. Dong, P. Nanda, K. Guo, J. Liao, and G. Zheng, “Incoherent Fourier ptychographic photography using structured light,” Photon. Res. 3(1), 19–23 (2015). [CrossRef]  

9. C. B. Müller and J. Enderlein, “Image Scanning Microscopy,” Phys. Rev. Lett. 104(19), 198101 (2010). [CrossRef]   [PubMed]  

10. G. P. J. Laporte, N. Stasio, C. J. R. Sheppard, and D. Psaltis, “Resolution enhancement in nonlinear scanning microscopy through post-detection digital computation,” Optica 1(6), 455–460 (2014). [CrossRef]  

11. I. J. Cox, C. J. Sheppard, and T. Wilson, “Super-resolution by confocal fluorescent microscopy,” Optik (Stuttg.) 60, 391–396 (1982).

12. A. Jost, E. Tolstik, P. Feldmann, K. Wicker, A. Sentenac, and R. Heintzmann, “Optical Sectioning and High Resolution in Single-Slice Structured Illumination Microscopy by Thick Slice Blind-SIM Reconstruction,” PLoS One 10(7), e0132174 (2015). [CrossRef]   [PubMed]  

13. R. Ayuk, H. Giovannini, A. Jost, E. Mudry, J. Girard, T. Mangeat, N. Sandeau, R. Heintzmann, K. Wicker, K. Belkebir, and A. Sentenac, “Structured illumination fluorescence microscopy with distorted excitations using a filtered blind-SIM algorithm,” Opt. Lett. 38(22), 4723–4726 (2013). [CrossRef]   [PubMed]  

14. P. Thibault and A. Menzel, “Reconstructing state mixtures from diffraction measurements,” Nature 494(7435), 68–71 (2013). [CrossRef]   [PubMed]  

15. D. J. Batey, D. Claus, and J. M. Rodenburg, “Information multiplexing in ptychography,” Ultramicroscopy 138, 13–21 (2014). [CrossRef]   [PubMed]  

16. S. Dong, R. Shiradkar, P. Nanda, and G. Zheng, “Spectral multiplexing and coherent-state decomposition in Fourier ptychographic imaging,” Biomed. Opt. Express 5(6), 1757–1767 (2014). [CrossRef]   [PubMed]  

17. A. M. Maiden, M. J. Humphry, and J. M. Rodenburg, “Ptychographic transmission microscopy in three dimensions using a multi-slice approach,” J. Opt. Soc. Am. A 29(8), 1606–1614 (2012). [CrossRef]   [PubMed]  

Supplementary Material (2)

Visualization 1: MP4 (1007 KB)
Visualization 2: MP4 (2780 KB)
