Optica Publishing Group

Fly-scan high-throughput coded ptychographic microscopy via active micro-vibration and rolling-shutter distortion correction

Open Access

Abstract

Recent advancements in ptychography have demonstrated the potential of coded ptychography (CP) for high-resolution optical imaging in a lensless configuration. However, CP suffers from limited imaging throughput due to scanning inefficiencies. To address this, we propose what we believe is a novel ‘fly-scan’ scanning strategy utilizing two eccentric rotating mass (ERM) vibration motors for high-throughput coded ptychographic microscopy. The intrinsic continuity of the ‘fly-scan’ technique effectively eliminates the scanning overhead typically encountered during data acquisition. Additionally, its randomized scanning trajectory considerably reduces periodic artifacts in image reconstruction. We also developed what we believe to be a novel algorithm to correct rolling-shutter distortions. We built a low-cost, DIY prototype platform and validated our approach with various samples, including a resolution target, a quantitative phase target, a thick potato sample, and biospecimens. The reported platform may offer a cost-effective and turnkey solution for high-throughput bio-imaging.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Ptychography was originally developed to address the phase problem encountered in electron crystallography [1]. The technique evolved significantly with the adoption of an iterative phase retrieval framework for ptychographic reconstruction [2,3]. In this experimental procedure, the object is laterally translated across a spatially confined probe beam in real space, and the corresponding diffraction patterns are acquired in reciprocal space. The confined probe beam imposes a spatial constraint in real space, delineating the physical boundaries of the object for each measurement. Concurrently, the acquired diffraction measurements impose Fourier magnitude constraints in reciprocal space, which are enforced on the estimated solution. The method requires sufficient overlap between adjacent scan positions, utilizing the redundantly captured diffraction patterns to reconstruct both the probe and the extended object through ptychographic phase retrieval. Ptychography has developed rapidly over the last decade and attracted significant attention from various research communities [4–11]. However, due to its relatively low imaging resolution and throughput, applications of ptychography in the visible light regime have been limited. Recently, a novel development termed coded ptychography (CP) has demonstrated its potential for high-resolution optical imaging in a lensless configuration [12,13]. This new modality of ptychography has attracted considerable interest from biomedical researchers. In a typical implementation of CP, a dense and thin coded layer is directly coated on the surface of the image sensor [14], forming the coded image sensor. Unlike a spatially confined probe beam, CP uses a fiber-coupled laser beam to illuminate the entire object over an extended area. By translating the object or the integrated coded sensor to different lateral positions, a set of corresponding diffraction intensity patterns is recorded for reconstruction.
The coded surface can be treated as an unconfined computational scattering lens, converting the large diffraction angles of the light waves into smaller angles that can be detected by the pixel array. As a result, previously inaccessible high-resolution object details can now be acquired by the coded image sensor. CP has been demonstrated in biomedical imaging applications, including digital pathology [15], high-throughput cytometric analysis and screening [16], and large-scale yeast cell culture monitoring [17], among others [18–23].

Conventional ptychography is typically performed in a step-scan mode. Before each acquisition, the detector must wait until the object has been translated and settled at a pre-defined position, resulting in significant dead time that slows down data acquisition. To overcome this scanning overhead, one potential solution is the continuous or ‘fly-scan’ ptychography method [24–27], where ‘fly-scan’ means data can be acquired while the sample is in continuous motion. A continuously moving sample, however, yields blurred diffraction patterns that degrade the reconstruction quality; a coherent mode decomposition model is then used to address this issue. The reconstruction process still presents two challenges: 1) it requires a uniform sample velocity to prevent discrepancies in data collection; 2) the straight-line scan mode may introduce periodic artifacts in the reconstructed image. It is worth noting that a spiral-path-based fly-scan ptychography has been proposed to avoid these limitations [28], but its requirement for precise mechanical actuators prevents its adoption in miniaturized and low-cost on-chip microscopy. CP, by contrast, benefits from an effective positional tracking algorithm, eliminating the need for feedback from precise mechanical actuators [29,30]. In the typical implementation of CP, the specimen or the coded sensor is translated by a low-cost step motor using the step-scan mode or the straight-line scan mode. The actuator braking and acceleration at the end of each straight-line scan path can introduce additional scanning overhead, posing an obstacle to high-throughput imaging.

In this work, we report a novel ‘fly-scan’ scanning strategy for high-throughput coded ptychographic microscopy. Two low-cost eccentric rotating mass (ERM) vibration motors replace the conventional precise mechanical actuators or step motors, producing active micro-vibration that continuously introduces random positional shifts to the object or the coded image sensor. With this ‘fly-scan’ strategy, there is no braking or acceleration during data acquisition, achieving a completely continuous scanning mode. The scanning overhead is eliminated entirely, thereby increasing the imaging throughput, and the randomized scanning trajectory avoids periodic artifacts in the reconstruction. Meanwhile, we develop a robust rolling-shutter distortion correction algorithm to overcome the challenge posed by rolling-shutter effects. We design a 3D-printed translational stage based on the ERM vibration motors, enabling us to build a low-cost, DIY microscopy platform. To validate its imaging performance, several experiments are carried out with different kinds of biospecimens. The reported platform may provide a cost-effective and turnkey solution for high-throughput bio-imaging.

2. Fly-scan coded ptychographic microscopy via active micro-vibration

Figure 1(a) shows the schematic of the reported fly-scan coded ptychographic microscopy, where a fiber-coupled 532 nm laser is used for illumination. The laser power is ∼50 mW, providing high optical flux to reduce the exposure time during data acquisition. The light wave passes through the sample, is modulated by the coded layer, and is then detected by the image sensor. Nowadays, most CMOS sensors use a rolling shutter because of its lower noise, wider dynamic range, and lower heat generation compared to a global shutter. In our implementation, we select a rolling-shutter camera with a resolution of 3000 × 4000 and a pixel size of 1.85 µm (The Imaging Source, DMM 37UX226). The coded layer consists of a thin and dense layer of microbeads with diameters of 1-5 µm, directly coated on top of the image sensor to form the coded rolling-shutter image sensor, as shown in Fig. 1(b). To translate the sample or the image sensor, we use two ERM vibration motors (BestTong, DC 1.5 V, A0000073) to generate active micro-vibration. The ERM vibration motor is a DC motor with an offset mass attached to the shaft. When the ERM rotates, the offset mass creates a net centrifugal force, causing motor displacement. The use of ERM vibration motors offers three advantages: 1) Low cost. Each ERM vibration motor costs only $2, which is at least two orders of magnitude cheaper than the translation stages used in conventional ptychography or CP. 2) No dead time. It produces completely continuous motion without any stops in the actuating process. 3) No periodic artifacts. The random scanning shifts produced by the ERM motors avoid periodic artifacts in the ptychographic reconstruction. We design a 3D-printed module as the sample stage or the sensor stage, consisting of two sets of parallel-beam flexures to generate motion in the x-y plane, as shown in Figs. 1(b)–1(c).
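The displacement mechanism can be illustrated with a back-of-the-envelope calculation. The Python sketch below evaluates the net centrifugal force F = m·r·ω² produced by the offset mass; the motor parameters are assumed illustrative values for a coin-type ERM, not measured values from the reported platform.

```python
import math

def erm_centrifugal_force(mass_kg, eccentricity_m, rpm):
    """Net centrifugal force F = m * r * omega^2 of an ERM's offset mass."""
    omega = 2.0 * math.pi * rpm / 60.0  # rotation rate in rad/s
    return mass_kg * eccentricity_m * omega ** 2

# Assumed illustrative values: 0.5 g offset mass, 0.5 mm eccentricity, 10000 rpm
force_N = erm_centrifugal_force(0.5e-3, 0.5e-3, 10000)  # a fraction of a newton
```

Even for such a small motor, the resulting force is on the order of a few tenths of a newton, which is ample to deflect the parallel-beam flexures by micrometers.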
Each ERM motor actuates one set of flexures, driving the stage in the corresponding direction. This design enables easy x-y translational motion while constraining unwanted axial or rotational motion. Figure 1(d) shows the reported prototype platform, where the coded rolling-shutter image sensor is fixed on the 3D-printed holder and a pathology slide is fixed on the sample stage (Fig. 1(c)). The coded rolling-shutter image sensor is placed under the sample at a distance ${d_1}$ of ∼500 µm. To operate the system, we simply drive each ERM motor by connecting it to a constant-voltage DC source at the motor’s rated voltage, without the need for an extra driver board. A constant voltage drives the motor at a constant speed, frequency, and vibration amplitude until the power supply is switched off. The sample stage is continuously actuated, generating random shifts in the x-y plane. The diffraction patterns are subsequently acquired by the coded rolling-shutter image sensor operated at its maximum frame rate. No overhead problem is encountered; refer to Visualization 1 for a demonstration of the operation.


Fig. 1. Fly-scan coded ptychographic microscopy via active micro-vibration. (a) Schematic of the reported platform, where a fiber-coupled 532 nm laser is used for sample illumination. The coded rolling-shutter image sensor is placed under the sample with a distance ${d_1}$ ∼500 µm. The distance between the coded layer and the detection plane of the image sensor is ∼840 µm (${d_2}$). (b) A thin and dense layer of microbeads is directly coated on top of the image sensor. The clear region of the coded layer is used for positional tracking of the sample or the coded sensor. (c) Two ERM vibrational motors are used to produce active micro-vibration to the sample stage or the sensor stage. Each ERM motor is used for each set of the flexure to actuate the stage in its corresponding direction. (d) The prototype platform. Refer to Visualization 1 for a demonstration of its operation. (e) The reconstruction procedure of the reported schema. First, the captured distorted raw images are corrected via the rolling-shutter distortion correction algorithm. Then, a conventional ptychographic phase retrieval is performed for the high-resolution complex-valued object reconstruction.


High-throughput microscopy plays a vital role in numerous biomedical applications [31,32]. The imaging throughput depends primarily on imaging speed, which in our implementation is enhanced through two strategic integrations. First, the use of a high-power laser limits the exposure time to the sub-millisecond level, rendering it insignificant in the data acquisition. Second, the adoption of ERM vibration motors eliminates scanning overhead, reducing scanning time and increasing imaging speed. It is important to note that rolling-shutter cameras generally have higher frame rates, making them widely used in high-throughput imaging applications. However, the fly-scan strategy results in a rolling-shutter effect that distorts the captured raw images. To address this issue, we develop a rolling-shutter distortion correction algorithm. In the next section, we introduce the imaging model and the rolling-shutter distortion correction algorithm in detail.

3. Ptychographic reconstruction with a rolling-shutter distortion correction

3.1 Forward coded ptychographic imaging model

In the data acquisition process of CP, the object $O({x,y} )$ (or the coded rolling-shutter image sensor) is randomly translated to different positions $({x_i},{y_i})$ in the x-y plane, and the corresponding diffraction measurements are acquired for reconstruction. The forward imaging model of the reported system can be written as:

$$I_i^{distort}({x,y} )= {|{[{({O({x - {x_i},y - {y_i}} )\ast ps{f_{d1}}} )\cdot cl({x,y} )} ]\ast ps{f_{d2}}} |^2}, $$
where $I_i^{distort}({x,y} )$ denotes the captured distorted raw image, $O({x,y} )$ is the object, $cl({x,y} )$ denotes the coded layer profile, $ps{f_{d1}}$ and $ps{f_{d2}}$ represent the free-space propagation kernels for the distances ${d_1}$ and ${d_2}$, ‘*’ represents the convolution operation, and ‘·’ stands for point-wise multiplication. We aim to recover the high-resolution complex-valued object from the measurements $I_i^{distort}({x,y} )$. Conceptually, the reconstruction procedure of the reported schema is shown in Fig. 1(e). The distorted raw images are first captured and subsequently corrected via the rolling-shutter distortion correction algorithm. A conventional ptychographic phase retrieval is then performed for the high-resolution complex-valued object reconstruction [12,13].
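As a concrete illustration of the forward model, the following Python sketch simulates one coded measurement under stated assumptions: the free-space kernels $ps{f_{d1}}$ and $ps{f_{d2}}$ are implemented as angular-spectrum transfer functions, the lateral shift is restricted to whole pixels with periodic boundaries, and all parameter values are placeholders rather than the platform's calibrated ones.

```python
import numpy as np

def propagate(field, pixel_m, wavelength_m, distance_m):
    """Angular-spectrum free-space propagation (one realization of psf_d)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pixel_m)                      # spatial frequencies
    fsq = fx[None, :] ** 2 + fx[:, None] ** 2
    arg = np.maximum(0.0, 1.0 / wavelength_m ** 2 - fsq)   # drop evanescent part
    H = np.exp(1j * 2 * np.pi * distance_m * np.sqrt(arg))
    return np.fft.ifft2(np.fft.fft2(field) * H)

def coded_measurement(obj, coded_layer, shift_px, pixel_m, wavelength_m, d1, d2):
    """I_i = |[(O(x - x_i, y - y_i) * psf_d1) . cl(x, y)] * psf_d2|^2."""
    shifted = np.roll(obj, shift=shift_px, axis=(0, 1))    # integer-pixel shift
    at_layer = propagate(shifted, pixel_m, wavelength_m, d1)
    at_sensor = propagate(at_layer * coded_layer, pixel_m, wavelength_m, d2)
    return np.abs(at_sensor) ** 2
```

With `obj` and `coded_layer` as complex 2D arrays, calling `coded_measurement(obj, cl, (3, -5), 1.85e-6, 532e-9, 500e-6, 840e-6)` yields one simulated raw frame for the shift $({x_i},{y_i})$ of three and minus five pixels.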

3.2 Rolling-shutter distortion correction algorithm

In industrial camera systems using rolling-shutter modes, pixel rows are exposed successively, each with a distinct temporal offset from its predecessor. This implies a staggered exposure across the sensor array, where each pixel row is scanned individually rather than the whole array being exposed simultaneously. Consequently, this mechanism can incur image distortions when the motion velocity of an object exceeds the exposure and readout capabilities of the image sensor. As illustrated in the framework depicted in Fig. 1(e), the vibrational velocity induced by the ERM motors surpasses the capture velocity of the rolling-shutter image sensor, resulting in noticeable distortions in the acquired raw images. For CP, high-quality reconstruction relies on precise tracking of the object’s positional shifts $({x_i},{y_i})$. Conventionally, one can crop any clear region of the raw images and perform cross-correlation analysis for motion tracking. Normally, the calculated positional shifts remain invariant irrespective of whether the selected cropping area lies at the top, middle, or bottom of the frame. Due to the rolling-shutter effect, however, the calculated positional shifts depend on the specific cropping region selected. This means that if we use the middle region to calculate the object’s positional shifts for full-field reconstruction, only the reconstruction of the middle region will be clear, while the reconstruction of the top or bottom region will be blurred. The rolling-shutter distortion correction algorithm proposed in this work effectively corrects the distortions present in the raw images. With these corrected raw images, a high-quality full-field reconstruction can be achieved based on the object's positional shifts calculated from any clear region.
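Cross-correlation-based position tracking of the kind described above can be sketched as follows. This is a minimal NumPy version assuming integer-pixel shifts and circular boundary conditions; a practical implementation would typically use sub-pixel registration.

```python
import numpy as np

def estimate_shift(ref, img):
    """Integer-pixel shift of `img` relative to `ref`, found as the peak of
    their FFT-based cross-correlation (circular boundaries assumed)."""
    corr = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(img))
    peak = np.array(np.unravel_index(np.argmax(np.abs(corr)), corr.shape))
    size = np.array(corr.shape)
    peak[peak > size // 2] -= size[peak > size // 2]  # wrap to signed shifts
    return tuple(int(p) for p in peak)
```

For example, if `img = np.roll(ref, (5, -3), axis=(0, 1))`, then `estimate_shift(ref, img)` returns `(5, -3)`. Applying this to crops taken from different rows of a rolling-shutter frame exposes the row-dependent shifts that the correction algorithm removes.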

The essence of the rolling-shutter distortion correction algorithm is a coordinate transformation. From this point of view, the distortion manifests as an aberration of the ideal coordinates of the captured image. Accordingly, the primary strategy is to first calculate the distorted coordinates and then perform a two-dimensional interpolation that combines the distorted raw image and the calculated distorted coordinates to yield a distortion-corrected raw image. The rolling-shutter distortion correction algorithm can be summarized as follows: 1) (Lines 1-5) Segment each raw image into K segments along the horizontal direction (assuming K is odd, so the index of the middle segment is (K + 1)/2); perform cross-correlation to analyze the positional shifts of each segment of the distorted raw images. 2) (Lines 6-8) Calculate the pixel difference of the positional shifts between each segment and the middle segment. 3) (Lines 9-11) Interpolate the calculated pixel differences via a curve-fitting approach to obtain the pixel difference of the positional shifts for each row of the distorted raw images. 4) (Lines 12-14) Create ideal coordinates; calculate the distorted coordinates based on the ideal coordinates and the per-row pixel differences of the positional shifts. 5) (Lines 15-17) Correct the distorted raw images by performing a 2D interpolation based on the distorted raw images and the distorted coordinates. Figure 2 shows the outline of the rolling-shutter distortion correction algorithm. We also open-source the related MATLAB code as a supplement, as shown in Appendix A.
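The steps above can be condensed into a short NumPy sketch. This is a simplified stand-in for the MATLAB code in Appendix A: the per-segment shift differences are assumed to be already measured, the curve fit is reduced to linear interpolation, and nearest-neighbour resampling replaces the full 2D interpolation.

```python
import numpy as np

def correct_rolling_shutter(img, seg_rows, seg_dx, seg_dy):
    """Resample a rolling-shutter-distorted frame onto the ideal grid.

    seg_rows        row index of each segment's centre
    seg_dx, seg_dy  each segment's positional-shift difference (in pixels)
                    relative to the middle segment (from cross-correlation)
    """
    rows, cols = img.shape
    r = np.arange(rows)
    # Step 3: interpolate segment-level differences to every row
    dx = np.interp(r, seg_rows, seg_dx)
    dy = np.interp(r, seg_rows, seg_dy)
    # Step 4: distorted coordinates = ideal coordinates + per-row differences
    rr = np.clip(np.round(r[:, None] + dy[:, None]), 0, rows - 1).astype(int)
    cc = np.clip(np.round(np.arange(cols)[None, :] + dx[:, None]), 0, cols - 1).astype(int)
    # Step 5: resample the distorted frame at the distorted coordinates
    return img[rr, cc]
```

Feeding a synthetically sheared frame with the same per-segment differences back through this function recovers the undistorted frame away from the image borders, which mirrors the validation in Fig. 3.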


Fig. 2. Outline of the rolling-shutter distortion correction algorithm.


To verify the developed rolling-shutter distortion correction algorithm, we chose a cancer tissue slide as a sample and performed a validation experiment on the reported microscopy platform (Fig. 1(d)). We captured 1500 raw images with a total acquisition time of ∼50 seconds. The positional shifts were then calculated from three different segments cropped from the top, middle, and bottom areas of the clear region of the distorted raw images. The scanning range is about ±20 µm with an average scanning distance of 5.75 µm. As shown in Fig. 3(a), in the presence of the rolling-shutter effect, the calculated random scanning positions of the object differ from each other; the different colors indicate segments cropped from different areas. We then corrected the distorted raw images using the above distortion correction algorithm and recalculated the scanning positions of the three segments. Figure 3(b) shows that the scanning positions of the three segments are now almost identical, indicating that the calculated positional shifts are independent of the segment choice after rolling-shutter distortion correction. Furthermore, we performed the ptychographic reconstruction with the distorted raw images and the corrected raw images, respectively. Figure 4(a) shows a recovered large field-of-view amplitude image. Figures 4(b1)-(d1) present three magnified views of regions in Fig. 4(a); the recovered images from the top (Fig. 4(b1)) and bottom (Fig. 4(d1)) regions are blurred. As a comparison, Figs. 4(b2)-(d2) show clear reconstructions after distortion correction, and Fig. 4(e) shows the corresponding large field-of-view reconstruction. The image quality is clearly improved compared to Fig. 4(a).


Fig. 3. The calculated random scanning positions without (a) and with (b) rolling-shutter distortion correction. The different colors indicate different segments: blue, black and red mean top, middle and bottom, respectively.



Fig. 4. Ptychographic reconstruction without and with rolling-shutter distortion correction. (a) Recovered large field-of-view amplitude image with distorted raw images. (b1)-(d1) Three zoomed-in views of (a). (b2)-(d2) Three zoomed-in views of (e). (e) Recovered large field-of-view amplitude image with corrected raw images.


4. Imaging performance of the reported prototype platform

4.1 Resolution performance and quantitative phase imaging

We first validate the reported ‘fly-scan’ coded ptychographic microscopy platform using a USAF resolution target (2015aUSAF) in Fig. 5. In this experiment, the processing time for 1500 raw images with 1024 by 1024 pixels is ∼3 minutes for 5 iterations using a Dell Precision T3660 desktop computer. Figure 5(a) shows the captured raw image of the resolution target, and Fig. 5(b) shows the reconstruction, where we can resolve the 0.62 µm linewidth of group 9, element 5. The normalized intensity profiles of the red and blue lines shown in Fig. 5(b2) are presented in Fig. 5(c). To further improve the resolution, one can model the pixel PSF and the angular transfer function in the forward imaging model. In addition, the currently achievable resolution is limited by the feature size (average diameter ∼2 µm) of the coded layer. In our latest work [15], we demonstrated that using goat blood cells as the coded layer can help improve resolution due to their smaller features. Using an image sensor with a smaller pixel size can further improve the resolution.


Fig. 5. Validation using a USAF resolution target. (a) Captured raw image of the resolution target. (b) Reconstruction for the resolution target. We can resolve 0.62 µm half-pitch linewidth in group 9, element 5. (c) The normalized intensity profiles of the red and blue lines (group 9, element 5) shown in (b2).


In the second experiment, we validate the quantitative imaging nature of the reported platform using a quantitative phase target (Benchmark QPT). Figure 6(a1) shows the captured raw image of the phase target and Fig. 6(a2) shows the recovered quantitative phase. The line profile across the red dashed circle in Fig. 6(a2) is plotted in Fig. 6(b). The recovered phase is in good agreement with the ground-truth height of the phase target, validating the quantitative imaging nature of the reported platform. We also test the phase imaging performance using an unstained mouse kidney slide. Figures 6(c1) and 6(c2) show the captured raw image and the recovered phase profile, respectively. The quantitative phase imaging capability offers a label-free solution for biology-related applications.


Fig. 6. Validation for quantitative phase imaging. Captured raw image (a1) and recovered quantitative phase (a2) of the phase target. (b) The line trace of the red dash circle in (a2). The ground-truth radian is ∼1.53. Captured raw image (c1) and recovered phase profile (c2) of the unstained mouse kidney slide.


4.2 3D digital refocusing of a thick specimen

The reported fly-scan coded ptychographic imaging approach enables digital refocusing of a thick specimen. Once the exit wavefront of the object is recovered, we can digitally propagate the recovered complex wavefront to any plane along the optical axis to obtain a 3D sectioned image. In this experiment, we validate this capability using a thick potato sample. Figure 7(a) shows the recovered amplitude of the object’s exit wavefront, and Figs. 7(b1)-(b4) show the recovered amplitude after digitally propagating to z = 758 µm, 782 µm, 797 µm, and 810 µm. We can see focused images of different organelles and cell walls at different axial planes.
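Numerically, the refocusing step is a single free-space propagation per axial plane. The sketch below is a minimal Python version under the assumption of an angular-spectrum kernel; the variance-based sharpness score is an illustrative autofocus metric, not the paper's exact routine.

```python
import numpy as np

def angular_spectrum(field, pixel_m, wavelength_m, z_m):
    """Propagate a complex wavefront over distance z (angular-spectrum method)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pixel_m)
    fsq = fx[None, :] ** 2 + fx[:, None] ** 2
    arg = np.maximum(0.0, 1.0 / wavelength_m ** 2 - fsq)  # drop evanescent part
    H = np.exp(1j * 2 * np.pi * z_m * np.sqrt(arg))
    return np.fft.ifft2(np.fft.fft2(field) * H)

def refocus_stack(exit_wave, pixel_m, wavelength_m, z_list):
    """Amplitude image and a variance-based sharpness score for each plane."""
    stack = [np.abs(angular_spectrum(exit_wave, pixel_m, wavelength_m, z))
             for z in z_list]
    scores = [float(np.var(a)) for a in stack]
    return stack, scores
```

One would call `refocus_stack(recovered_wavefront, 1.85e-6, 532e-9, [758e-6, 782e-6, 797e-6, 810e-6])` and either inspect the stack directly, as in Figs. 7(b1)-(b4), or pick the plane with the highest score.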


Fig. 7. Testing the 3D digital refocusing capability of the reported platform using a thick potato sample. (a) The recovered amplitude of the object’s exit wavefront. (b) The four images after digitally propagating to four different axial positions. Refer to Visualization 2 for a demonstration of digital propagation process.


4.3 High-resolution lensless imaging over a large field-of-view

Achieving both high resolution and a large field-of-view at the same time is important in many biomedical applications. The reported microscopy platform is based on the lensless modality, which presents unique advantages. Primarily, the unit-magnification configuration allows us to use the entire sensor area as the imaging field-of-view. Furthermore, the coded ptychographic phase retrieval enables a high-resolution reconstruction of the object. This combination of large field-of-view and high resolution breaks the tradeoff between field-of-view and spatial resolution that exists in traditional optical microscopes. Figure 8(a) shows the large field-of-view, high-resolution reconstruction of a blood smear sample; the imaging area is 5.5 mm by 5.5 mm. Figures 8(b)–8(d) show the magnified recovered amplitude and phase of three highlighted regions in Fig. 8(a). It is worth noting that the top (Fig. 8(b)) and bottom (Fig. 8(d)) areas are free of distortion, benefiting from the use of the rolling-shutter distortion correction algorithm in the large field-of-view ptychographic reconstruction.


Fig. 8. The large field-of-view and high-resolution reconstruction of a blood smear sample. (a) The large field-of-view image of the recovered object amplitude. (b)-(d) Magnified views of the recovered amplitude and phase images of regions b-d.


In the last experiment, we test the reported platform using a brain tumor section. This section is labeled with the Ki-67 biomarker, a proliferation-associated nuclear protein that is only detected in dividing cells. Figure 9(a) shows the recovered large field-of-view phase image of the brain section, covering an imaging area of 6.4 mm by 5.5 mm. The magnified view of Fig. 9(a) is shown in Fig. 9(b). Figures 9(c1)-(c2) show the zoomed-in views of the phase images of regions c1 and c2 in Fig. 9(b), and the corresponding intensity images are shown in Figs. 9(d1)-(d2). With the recovered images, we can perform cytometric analysis of Ki-67 positive and negative cells. In our implementation, both the recovered intensity image and the phase image are used for segmentation; the specific segmentation strategy can be found in our latest work [33]. Figures 9(e1)-(e2) show the segmented masks of positive and negative cells in selected regions. As a comparison, the ground-truth intensity images captured using a 20×/0.5 NA objective lens are shown in Figs. 9(f1)-(f2). The average counting difference between the recovered image and the objective lens is ∼2.1%.


Fig. 9. The large field-of-view and high-resolution reconstruction of a brain tumor section. (a) Large field-of-view phase image. (b) Zoomed-in view of (a). (c1-c2) Recovered phase. (d1-d2) Recovered intensity. (e1-e2) Cell segmentation masks. (f1-f2) Ground-truth images captured using a 20×/0.5 NA objective.


5. Discussion and conclusion

In summary, we report a fly-scan scanning strategy for high-throughput lensless coded ptychographic imaging. A low-cost, simple, 3D-printed translational stage is designed to perform the random scanning process. We demonstrate a half-pitch resolution of 0.62 µm over a field-of-view of 5.5 mm by 6.4 mm. A quantitative phase target and various biological samples are used to test the imaging performance. We also demonstrate a 3D digital refocusing capability using a thick potato sample, where refocusing can be performed after the data has been acquired.

The unique advantages of the reported platform can be summarized as follows: 1) In our implementation, we use two ERM vibration motors and a 3D-printed holder to build the translational stage, a marked improvement over conventional precise mechanical actuators or step motors. This combination not only reduces size but also minimizes cost, benefiting the development of cost-effective, compact, and field-portable on-chip microscopy platforms. 2) The fly-scan random scanning strategy offers a turnkey solution for other high-throughput scanning imaging techniques. It is a completely continuous scanning approach without any scanning overhead or motion braking, and the random path of positional shifts also helps remove periodic artifacts in the reconstruction. 3) The implementation of our rolling-shutter distortion correction algorithm enables the reported platform to perform high-resolution, large field-of-view imaging despite the rolling-shutter effect. This development potentially paves the way for imaging fast-moving objects or monitoring dynamic biological events using a limited-frame-rate image sensor.

At present, ultra-high space-time-bandwidth-product microscopic imaging remains an unsolved fundamental scientific problem. The current implementation requires at least 450 raw images to recover a good result, which corresponds to an acquisition time of ∼15 seconds. Monitoring live cells in real time thus remains a challenge for the current configuration. Effort along this direction is ongoing.

Appendix A: MATLAB code of the rolling-shutter distortion correction algorithm

The following MATLAB code consists of seven steps: 1) Set the parameters. 2) Perform cross-correlation analysis for each segment’s position tracking. 3) Calculate the pixel difference of the positional shifts between each segment and the middle segment. 4) Interpolate the calculated pixel differences via a curve-fitting approach. 5) Create the ideal coordinates. 6) Calculate the distorted coordinates. 7) Perform 2D interpolation to obtain the corrected raw images.

oe-32-6-8778-i001

oe-32-6-8778-i002

Funding

Proof of Concept Foundation of Xidian University Hangzhou Institute of Technology (GNYZ2023YL0406); Postdoctoral Fellowship Program of CPSF (GZC20232025); China Postdoctoral Science Foundation (CPSF), (2023M732731); National Natural Science Foundation of China (62305258).

Disclosures

The authors declare no conflicts of interest.

Data availability

The data presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. W. Hoppe, “Diffraction in inhomogeneous primary wave fields. 1. Principle of phase determination from electron diffraction interference,” Acta Cryst A 25(4), 495–501 (1969). [CrossRef]  

2. H. M. L. Faulkner and J. Rodenburg, “Movable aperture lensless transmission microscopy: a novel phase retrieval algorithm,” Phys. Rev. Lett. 93(2), 023903 (2004).

3. M. Guizar-Sicairos and J. R. Fienup, “Phase retrieval with transverse translation diversity: a nonlinear optimization approach,” Opt. Express 16(10), 7264–7278 (2008).

4. M. Dierolf, A. Menzel, P. Thibault, et al., “Ptychographic X-ray computed tomography at the nanoscale,” Nature 467(7314), 436–439 (2010).

5. D. F. Gardner, M. Tanksalvala, E. R. Shanblatt, et al., “Subwavelength coherent imaging of periodic samples using a 13.5 nm tabletop high-harmonic light source,” Nat. Photonics 11(4), 259–263 (2017).

6. Y. Jiang, Z. Chen, Y. Han, et al., “Electron ptychography of 2D materials to deep sub-ångström resolution,” Nature 559(7714), 343–349 (2018).

7. A. M. Maiden, M. J. Humphry, F. Zhang, et al., “Superresolution imaging via ptychography,” J. Opt. Soc. Am. A 28(4), 604–612 (2011).

8. A. M. Maiden and J. M. Rodenburg, “An improved ptychographical phase retrieval algorithm for diffractive imaging,” Ultramicroscopy 109(10), 1256–1262 (2009).

9. M. Stockmar, P. Cloetens, I. Zanette, et al., “Near-field ptychography: phase retrieval for inline holography using a structured illumination,” Sci. Rep. 3(1), 1927 (2013).

10. P. Thibault, M. Dierolf, O. Bunk, et al., “Probe retrieval in ptychographic coherent diffractive imaging,” Ultramicroscopy 109(4), 338–343 (2009).

11. P. Thibault and A. Menzel, “Reconstructing state mixtures from diffraction measurements,” Nature 494(7435), 68–71 (2013).

12. S. Jiang, P. Song, T. Wang, et al., “Spatial- and Fourier-domain ptychography for high-throughput bio-imaging,” Nat. Protoc. 18(7), 2051–2083 (2023).

13. T. Wang, S. Jiang, P. Song, et al., “Optical ptychography for biomedical imaging: recent progress and future directions [Invited],” Biomed. Opt. Express 14(2), 489–532 (2023).

14. C. Guo, S. Jiang, P. Song, et al., “Quantitative multi-height phase retrieval via a coded image sensor,” Biomed. Opt. Express 12(11), 7173–7184 (2021).

15. S. Jiang, C. Guo, P. Song, et al., “High-throughput digital pathology via a handheld, multiplexed, and AI-powered ptychographic whole slide scanner,” Lab Chip 22(14), 2657–2670 (2022).

16. S. Jiang, C. Guo, P. Song, et al., “Resolution-enhanced parallel coded ptychography for high-throughput optical imaging,” ACS Photonics 8(11), 3261–3271 (2021).

17. S. Jiang, C. Guo, Z. Bian, et al., “Ptychographic sensor for large-scale lensless microbial monitoring with high spatiotemporal resolution,” Biosens. Bioelectron. 196, 113699 (2022).

18. P. Song, S. Jiang, T. Wang, et al., “Synthetic aperture ptychography: coded sensor translation for joint spatial-Fourier bandwidth expansion,” Photonics Res. 10(7), 1624–1632 (2022).

19. P. Song, S. Jiang, H. Zhang, et al., “Super-resolution microscopy via ptychographic structured modulation of a diffuser,” Opt. Lett. 44(15), 3645–3648 (2019).

20. P. Song, R. Wang, J. Zhu, et al., “Super-resolved multispectral lensless microscopy via angle-tilted, wavelength-multiplexed ptychographic modulation,” Opt. Lett. 45(13), 3486–3489 (2020).

21. P. Song, C. Guo, S. Jiang, et al., “Optofluidic ptychography on a chip,” Lab Chip 21(23), 4549–4556 (2021).

22. Z. Bian, S. Jiang, P. Song, et al., “Ptychographic modulation engine: a low-cost DIY microscope add-on for coherent super-resolution imaging,” J. Phys. D: Appl. Phys. 53(1), 014005 (2020).

23. S. Jiang, C. Guo, T. Wang, et al., “Blood-coated sensor for high-throughput ptychographic cytometry on a Blu-ray disc,” ACS Sens. 7(4), 1058–1067 (2022).

24. X. Huang, K. Lauer, J. N. Clark, et al., “Fly-scan ptychography,” Sci. Rep. 5(1), 9074 (2015).

25. J. Deng, Y. S. G. Nashed, S. Chen, et al., “Continuous motion scan ptychography: characterization for increased speed in coherent x-ray imaging,” Opt. Express 23(5), 5438–5451 (2015).

26. J. N. Clark, X. Huang, R. J. Harder, et al., “Continuous scanning mode for ptychography,” Opt. Lett. 39(20), 6066–6069 (2014).

27. P. M. Pelz, M. Guizar-Sicairos, P. Thibault, et al., “On-the-fly scans for X-ray ptychography,” Appl. Phys. Lett. 105(25), 251101 (2014).

28. M. Odstrčil, M. Holler, and M. Guizar-Sicairos, “Arbitrary-path fly-scan ptychography,” Opt. Express 26(10), 12585–12593 (2018).

29. S. Jiang, J. Zhu, P. Song, et al., “Wide-field, high-resolution lensless on-chip microscopy via near-field blind ptychographic modulation,” Lab Chip 20(6), 1058–1065 (2020).

30. T. Wang, P. Song, S. Jiang, et al., “Remote referencing strategy for high-resolution coded ptychographic imaging,” Opt. Lett. 48(2), 485–488 (2023).

31. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7(9), 739–745 (2013).

32. G. Zheng, C. Shen, S. Jiang, et al., “Concept, implementations and applications of Fourier ptychography,” Nat. Rev. Phys. 3(3), 207–223 (2021).

33. C. Guo, S. Jiang, L. Yang, et al., “Depth-multiplexed ptychographic microscopy for high-throughput imaging of stacked bio-specimens on a chip,” Biosens. Bioelectron. 224, 115049 (2023).

Supplementary Material (2)

Visualization 1: A demonstration of CP operation.
Visualization 2: A demonstration of the digital propagation process.

Data availability

The data presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.


Figures (9)

Fig. 1. Fly-scan coded ptychographic microscopy via active micro-vibration. (a) Schematic of the reported platform, where a fiber-coupled 532 nm laser is used for sample illumination. The coded rolling-shutter image sensor is placed under the sample at a distance ${d_1}$ of ∼500 µm. The distance ${d_2}$ between the coded layer and the detection plane of the image sensor is ∼840 µm. (b) A thin and dense layer of microbeads is directly coated on top of the image sensor. The clear region of the coded layer is used for positional tracking of the sample or the coded sensor. (c) Two ERM vibration motors produce active micro-vibration of the sample stage or the sensor stage. Each ERM motor drives one set of flexures to actuate the stage along its corresponding direction. (d) The prototype platform. Refer to Visualization 1 for a demonstration of its operation. (e) The reconstruction procedure of the reported scheme. The captured distorted raw images are first corrected via the rolling-shutter distortion correction algorithm; conventional ptychographic phase retrieval is then performed to reconstruct the high-resolution complex-valued object.
Fig. 2. Outline of the rolling-shutter distortion correction algorithm.
Fig. 3. The calculated random scanning positions without (a) and with (b) rolling-shutter distortion correction. Colors indicate the different row segments: blue, black, and red correspond to the top, middle, and bottom segments, respectively.
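The segment coloring in Fig. 3 reflects the core rolling-shutter issue: sensor rows are read out sequentially while the stage moves continuously, so different row segments of a single frame correspond to slightly different scan positions. As a minimal, hypothetical sketch (not the authors' implementation; all names are illustrative), one can assign each segment its own position by interpolating the continuous trajectory at the segment's mean readout time:

```python
import numpy as np

def segment_positions(traj_t, traj_xy, frame_start, line_time, n_rows, n_segments=3):
    """Interpolate the (x, y) scan trajectory at each row segment's mean readout time.

    traj_t:   1D array of time stamps of the tracked trajectory
    traj_xy:  (len(traj_t), 2) array of tracked (x, y) positions
    frame_start: readout start time of this frame
    line_time:   readout time per sensor row (rolling shutter)
    """
    rows_per_seg = n_rows // n_segments
    positions = []
    for s in range(n_segments):
        mid_row = s * rows_per_seg + rows_per_seg / 2
        t = frame_start + mid_row * line_time  # readout time of the segment center
        x = np.interp(t, traj_t, traj_xy[:, 0])
        y = np.interp(t, traj_t, traj_xy[:, 1])
        positions.append((x, y))
    return positions
```

Each segment is then treated as its own measurement with its own scan position during phase retrieval, which is consistent with the three-color position sets shown in Fig. 3(b).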
Fig. 4. Ptychographic reconstruction without and with rolling-shutter distortion correction. (a) Recovered large field-of-view amplitude image with distorted raw images. (b1)-(d1) Three zoomed-in views of (a). (b2)-(d2) Three zoomed-in views of (e). (e) Recovered large field-of-view amplitude image with corrected raw images.
Fig. 5. Validation using a USAF resolution target. (a) Captured raw image of the resolution target. (b) Reconstruction of the resolution target, resolving the 0.62 µm half-pitch linewidth of group 9, element 5. (c) The normalized intensity profiles along the red and blue lines (group 9, element 5) shown in (b2).
Fig. 6. Validation for quantitative phase imaging. Captured raw image (a1) and recovered quantitative phase (a2) of the phase target. (b) The line trace along the red dashed circle in (a2). The ground-truth phase is ∼1.53 rad. Captured raw image (c1) and recovered phase profile (c2) of the unstained mouse kidney slide.
Fig. 7. Testing the 3D digital refocusing capability of the reported platform using a thick potato sample. (a) The recovered amplitude of the object’s exit wavefront. (b) Four images after digitally propagating to four different axial positions. Refer to Visualization 2 for a demonstration of the digital propagation process.
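The digital refocusing in Fig. 7 follows from numerically propagating the recovered complex exit wavefront to different axial planes. A minimal sketch using the standard angular spectrum method (function and variable names are illustrative, not taken from the paper's code; a square field is assumed):

```python
import numpy as np

def refocus(wavefront, z, wavelength, pixel_size):
    """Propagate a recovered complex wavefront by distance z (meters).

    Positive z propagates forward; negative z back-propagates.
    Angular spectrum method; evanescent components are discarded.
    """
    n = wavefront.shape[0]                      # assumes a square n-by-n field
    fx = np.fft.fftfreq(n, d=pixel_size)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(1j * 2 * np.pi / wavelength * z * np.sqrt(np.maximum(arg, 0)))
    H[arg < 0] = 0                              # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(wavefront) * H)

# A focal stack at several axial planes, as in Fig. 7(b) (distances illustrative):
# stack = [np.abs(refocus(exit_wave, z, 532e-9, 0.5e-6)) for z in (0, 10e-6, 20e-6, 30e-6)]
```

Because propagation is unitary for propagating components, refocusing by +z and then by −z recovers the original wavefront, which makes the routine easy to self-check.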
Fig. 8. The large field-of-view and high-resolution reconstruction of a blood smear sample. (a) The large field-of-view image of the recovered object amplitude. (b)-(d) Magnified views of the recovered amplitude and phase images of regions b-d.
Fig. 9. The large field-of-view and high-resolution reconstruction of a brain tumor section. (a) Large field-of-view phase image. (b) Zoomed-in view of (a). (c1-c2) Recovered phase. (d1-d2) Recovered intensity. (e1-e2) Cell segmentation masks. (f1-f2) Ground-truth images captured using a 20×/0.5 NA objective.

Equations (1)

$$I_i^{\mathrm{distort}}(x,y) = \left| \left[ \left( O(x - x_i,\, y - y_i) \ast \mathrm{psf}_{d_1} \right) \cdot cl(x,y) \right] \ast \mathrm{psf}_{d_2} \right|^2,$$
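The forward model of Eq. (1) can be sketched numerically: the object shifted to scan position $(x_i, y_i)$ is propagated over $d_1$ to the coded layer, modulated by the layer transmission $cl$, propagated over $d_2$ to the detector, and squared to give the recorded intensity. The sketch below is hypothetical (function and variable names are illustrative, not from the authors' code) and implements the free-space convolutions with $\mathrm{psf}_{d_1}$ and $\mathrm{psf}_{d_2}$ via the standard angular spectrum method:

```python
import numpy as np

def angular_spectrum(field, d, wavelength, pixel_size):
    """Free-space propagation of a square complex field over distance d (meters)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pixel_size)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0))
    H = np.exp(1j * kz * d) * (arg > 0)  # transfer function, evanescent cutoff
    return np.fft.ifft2(np.fft.fft2(field) * H)

def forward_model(obj, cl, shift, d1, d2, wavelength, pixel_size):
    """Simulate one raw intensity image I_i of Eq. (1).

    obj:   complex object transmission O
    cl:    complex coded-layer transmission cl(x, y)
    shift: integer pixel shift (x_i, y_i) of the object for this scan position
    """
    shifted = np.roll(obj, shift, axis=(0, 1))           # O(x - x_i, y - y_i)
    at_layer = angular_spectrum(shifted, d1, wavelength, pixel_size)
    modulated = at_layer * cl                            # coded-layer modulation
    at_sensor = angular_spectrum(modulated, d2, wavelength, pixel_size)
    return np.abs(at_sensor) ** 2                        # recorded intensity
```

With the paper's geometry, one would use `wavelength = 532e-9`, `d1 = 500e-6`, and `d2 = 840e-6`; the pixel size depends on the sensor.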