
Super-resolution via iterative phase retrieval for blurred and saturated biological images

Open Access

Abstract

One of the most fascinating problems addressed today is retrieving high-resolution data from blurred images of biological objects. In most cases the research relies either on a priori knowledge of the image nature or on a large number of images (either of the same object or of different objects obtained by the same imaging setup). If saturation is added to the blurring, most algorithms fail to sharpen the image, and in some cases researchers decline to use such images as an input. In this work a single captured blurred and saturated image is given, with no a priori knowledge except the fact that the primary blurring is due to a defocused imaging setup. The authors suggest a novel three-stage approach for retrieving higher resolution data from the intensity distribution of the blurred and saturated image. The core of the process is the phase retrieval algorithm suggested by Gerchberg and Saxton in 1972. The new method is explained in detail, and the algorithm is tested numerically and experimentally on several images to show the improvement in the sharpness of the spatial details.

©2008 Optical Society of America

1. Introduction

In recent years super resolution (SR) has become an extensive research subject. SR refers to the recovery of high-resolution data from images that, due to compression, defocus, or other forms of distortion, have lost the high-frequency data originally embedded in the image. The methods for overcoming this data loss and generating super-resolved imaging are quite versatile. In some cases the method is to obtain data about the blurring function and to use an inverse filter to reconstruct the high-resolution image [1,2]. Unfortunately, two main problems limit this approach. First, it is usually impossible to identify the exact blurring function, since it is a result of stochastic noise and thus only its statistical properties are known. Second, even if the blurring function is known, constructing an inverse filter might not be practical due to noise (e.g., if the original blurring filter has low values, the inverse filter will amplify the noise).

Other methods use large databases; they are divided into two groups. In the first group [3,4] one takes a large set of test images that exist in the database in both low and high resolution and tries to find the deblurring procedure that yields the best results with respect to all of them. There are two problems with this approach. First, no two pictures are identical, so one cannot be sure that the extracted inverse blurring procedure will be applicable to the new image at hand. Second, the blurring procedure usually varies from one test image to another, and thus the “anti-blurring” filter will be an average of many filters rather than an exact filter solution.

The second group of SR techniques uses a large database of many low-resolution pictures of the required subject [5–9]. Since a different portion of the high-resolution data is missing in every picture, it is possible to extract some high-resolution data from these images and to obtain a single high-resolution image. The main drawback of these methods is the large database required in order to increase the resolution of a single image.

Finally, all the methods mentioned above are used when some sort of pseudo-linear distortion is present. If one adds saturation to the equation, most methods fail completely due to the non-linear nature of saturation. However, saturation is an uninvited guest in many images, and one has to find a way to reconstruct the lost data in the saturated regions as well.

In this work we suggest a novel approach assuming only one given image: the blurred and saturated input image obtained by the CCD. The basic assumptions are: (1) most of the blurring is caused by defocusing in an optical imaging setup, and therefore finding the in-focus image plane is a primary objective; (2) the saturated regions contain only low spatial frequencies, and hence there is no point in trying to extract data from these regions. The data to be formed in these areas must result from an iterative procedure making use of other regions of the image, where higher frequencies are present.

If one has the complete data of the obtained image (i.e., magnitude and phase), reconstruction of the image in different imaging planes is quite simple (since defocus can be translated into a spherical phase in the Fourier domain); however, in the addressed case only the intensity distribution is known. In this research work the authors use a three-stage approach to retrieve higher resolution data from the intensity distribution of the blurred image. The first stage includes adding a phase distribution to the magnitude distribution (obtained directly from the intensity distribution). This is done by means of the phase retrieval algorithm suggested by Gerchberg and Saxton in 1972 [10], with the added requirement of obtaining a wider spectral distribution for the intensity than the one obtained without the phase distribution. Once this objective is achieved, one can turn to the second stage of the proposed algorithm, in which the blurred image (with its adjacent phase distribution) is virtually placed at the imaging plane of an optical imaging setup, and the setup is tuned to give a high-resolution image at its output plane. Since there is a range of distances over which the obtained image contains higher frequencies than the original object, the authors use a decision criterion maximizing the spectral distribution while rejecting high frequencies that are not a result of the required object but rather of the generated phase distribution. In the third and final stage the authors eliminate high-frequency noise from the image to obtain the required result.
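To make the data flow concrete, the following Python/NumPy skeleton sketches the three stages just described. The helper names (gerchberg_saxton, apply_defocus_phase, band_energy_ratio, smooth_and_patch) are ours and are sketched in the sections below; they are not taken from the paper.

```python
import numpy as np

def reconstruct(intensity, coeffs, target_wide, target_narrow):
    """Three-stage sketch: (1) retrieve a phase, (2) scan a defocus
    correction for maximal sharpness, (3) smooth and patch the holes."""
    mag = np.sqrt(np.clip(intensity, 0.0, None))
    results = []
    for target in (target_wide, target_narrow):   # two Gaussian widths
        phase = gerchberg_saxton(mag, target)                # stage 1
        field = mag * np.exp(1j * phase)
        best = max(coeffs, key=lambda c: band_energy_ratio(
            np.abs(apply_defocus_phase(field, c))))          # stage 2
        results.append(np.abs(apply_defocus_phase(field, best)))
    return smooth_and_patch(results[0], results[1])          # stage 3
```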

The proposed image processing approach allows super-resolved imaging. The obtained SR not only provides more spatial information in comparison to the original defocused and saturated image but also extracts (by the proposed extrapolation) even sub-wavelength features, due to the correlation that exists between them and their spatial neighbors.

2. Iterative single image super-resolution

2.1 The concept

When using a single image, one must have some sort of a priori knowledge when reconstruction is required. In the suggested reconstruction technique, what the authors know (or guess) beforehand is that the blurring is mainly a result of defocus at the output of an optical imaging setup. Such an aberration is expected when taking a picture of a biological 3-D object, since the 3-D structure does not allow the entire image to be in focus. Since only the intensity image is available, we obtain its magnitude (i.e., take the square root) and attach a random phase to this magnitude distribution in order to generate a complex field distribution. Under the assumption of an optical imaging setup with a defocus aberration, the focused image can be obtained from the given image, assuming the added phase distribution is the actual one produced by the setup, simply by free-space propagation to the focal plane. Since we do not know the actual phase distribution, we must impose a certain constraint on the spectral distribution of the complex amplitude image, such as a spectral bandwidth larger than the current one. Thus, we have two restrictions: the magnitude at the image plane and the bandwidth at the Fourier plane.

These two restrictions allow us to bounce back and forth between the image domain and its spatial frequency domain, in a procedure known as iterative phase retrieval (described in the following subsection). As will be shown later, the saturation problem is solved as the saturated regions are filled with data induced by the spatial structures in the other regions.

2.2 Review: Iterative phase retrieval

A well-known problem is to determine the phase of a phase-only object-plane filter that will produce a required intensity distribution in the Fourier domain. In their paper, Gerchberg and Saxton [10] suggested an iterative approach to do just that. This method is proven to converge to a phase filter with a minimal mean square error (MSE) [11].

The concept is quite simple. We start with an arbitrary phase-only filter in the object domain multiplying the input object (the original image); after a Fourier transform we obtain a Fourier-domain image, on which we impose the required Fourier intensity (actually the magnitude), leaving the phase as is. An inverse Fourier transform brings us back to the object domain. Since we demand a phase-only filter, we impose the intensity of the input object in this plane. Next we calculate the Fourier transform and return to the Fourier domain, and so on. This procedure is required because using only the phase of the complex filter that converts the input image exactly to the Fourier image gives poor results. As can be seen, if we impose half of the information (intensity or phase) in both the input and the output domains, the procedure converges monotonically.
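As a concrete illustration, here is a minimal NumPy sketch of the magnitude-constrained variant of this loop, i.e., the form used by stage 1 of this paper (object-plane magnitude and Fourier-plane magnitude imposed alternately); the function and variable names are ours:

```python
import numpy as np

def gerchberg_saxton(source_mag, fourier_mag, n_iter=100, seed=0):
    """Iteratively estimate a phase that maps |source_mag| in the object
    plane to |fourier_mag| in the Fourier plane (Gerchberg-Saxton)."""
    rng = np.random.default_rng(seed)
    # Start from the known object-plane magnitude with a random phase.
    field = source_mag * np.exp(1j * 2 * np.pi * rng.random(source_mag.shape))
    for _ in range(n_iter):
        # Forward transform, then impose the required Fourier magnitude.
        F = np.fft.fft2(field)
        F = fourier_mag * np.exp(1j * np.angle(F))
        # Back to the object plane; impose the known object magnitude.
        field = np.fft.ifft2(F)
        field = source_mag * np.exp(1j * np.angle(field))
    return np.angle(field)  # retrieved object-plane phase
```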

In later work Gerchberg [12] and Papoulis [13] suggested the use of this method for SR. However, both presented relatively simple test cases and assumed the properties of all iterations to be identical (except when noise reduction was addressed). An improved Gerchberg-Papoulis algorithm was recently suggested by Gur and Zalevsky [14]; however, it supplies good results only if the blurred image is actually a lower resolution version of the required image. There are of course other methods for obtaining the phase filter, such as simulated annealing [15], which ensures that the MSE has indeed a global minimum, but it is time- and resource-consuming. A more thorough discussion of the phase retrieval problem is given by Rodenburg [16]. Figure 1 illustrates the original Gerchberg-Saxton algorithm.

Fig. 1. The Gerchberg-Saxton algorithm. A_in and φ_in are the input-plane amplitude and phase, respectively; A_out and φ_out are the Fourier-plane amplitude and phase, respectively.

2.3 Super-resolution via phase retrieval

After reviewing the phase-retrieval algorithm we return to the problem at hand. The solution can be divided into three stages. In the first stage, a phase retrieval procedure is required to obtain a phase distribution for the blurred image. In the image plane the restriction is quite simple: the exact magnitude of the given image. At the Fourier plane, however, the restrictions are more flexible. One needs an intensity distribution that is wider than the one obtained from the phaseless image and at the same time has a physical justification. The intuitive solution is to try to extrapolate the low-frequency data to obtain a wider distribution with the same low frequencies as the original image. This might be a good solution if the image were merely a low-resolution version of the required image, but not for a blurred and saturated image, since the low-frequency data in the blurred image is completely different from the original (e.g., the saturation has added significant energy to the lower frequencies). For this reason we impose a medium-width Gaussian intensity distribution at the Fourier plane. This way we can ensure that higher frequencies are present in the reconstructed image without stipulating the structure of the lower ones. The Gaussian choice is also reasonable assuming an optical setup with a Gaussian laser beam distribution (that influences the blurring).
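A sketch of such a medium-width Gaussian target is given below; the width parameter sigma_frac is our assumption, since the paper does not specify the exact width used:

```python
import numpy as np

def gaussian_fourier_target(shape, sigma_frac=0.25):
    """Medium-width Gaussian magnitude target on the Fourier grid
    (FFT ordering); sigma_frac is an assumed fraction of the Nyquist band."""
    ny, nx = shape
    fy = np.fft.fftfreq(ny)  # normalized frequencies in [-0.5, 0.5)
    fx = np.fft.fftfreq(nx)
    FY, FX = np.meshgrid(fy, fx, indexing="ij")
    sigma = 0.5 * sigma_frac
    return np.exp(-(FX**2 + FY**2) / (2 * sigma**2))

# Usage with the Gerchberg-Saxton sketch above (stage 1):
# mag = np.sqrt(np.clip(intensity, 0, None))       # magnitude from intensity
# target = gaussian_fourier_target(mag.shape) * np.abs(np.fft.fft2(mag)).max()
# phase = gerchberg_saxton(mag, target, n_iter=100)
```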

At the second stage, after applying the Gerchberg-Saxton algorithm, one has to take the new complex amplitude image and form a sharper image (note that adding the phase did not alter the blurriness of the intensity). At this stage the suggested algorithm adds a spherical phase to the Fourier transform of the complex amplitude image to obtain (after an inverse Fourier transform) a new complex amplitude. This phase distribution is required to overcome the opposite phase distribution representing the defocus, given in Eq. (1):

$$\tilde{P}(p,q)=P(p,q)\cdot\exp\left[jk\left(\frac{1}{d_i}+\frac{1}{d_o}-\frac{1}{f}\right)\left(p^2+q^2\right)\right]\tag{1}$$

where P(p,q) is the aperture in the (p,q) plane, d_i and d_o are the distances from the imaging lens to the image and the object, respectively, and k = π/λ, with λ being the optical wavelength.
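A sketch of how such a quadratic (defocus-correcting) phase can be applied in the Fourier domain follows; the scanned coefficient coeff stands in for the bracketed factor of Eq. (1), and the grid spacing is an assumption:

```python
import numpy as np

def apply_defocus_phase(field, coeff, dx=1.0):
    """Multiply the spectrum of `field` by exp[j*coeff*(p^2+q^2)], the
    quadratic phase of Eq. (1); `coeff` plays the role of
    k*(1/d_i + 1/d_o - 1/f) and is scanned during stage 2."""
    ny, nx = field.shape
    p = np.fft.fftfreq(nx, d=dx) * 2 * np.pi  # angular spatial frequencies
    q = np.fft.fftfreq(ny, d=dx) * 2 * np.pi
    Q, P = np.meshgrid(q, p, indexing="ij")
    F = np.fft.fft2(field)
    F *= np.exp(1j * coeff * (P**2 + Q**2))
    return np.fft.ifft2(F)
```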

The coefficient of the phase is increased (or decreased) until sufficient sharpness is achieved. To define “sufficient” we note that a sharper image has a larger amount of energy in the higher frequencies. However, in the discussed case, some of the extremely high frequencies are a result of imposing a Gaussian intensity distribution that has very little in common with the original intensity distribution in the Fourier domain. Hence an optimization is required. Fortunately, the optimization is quite simple: maximizing the energy ratio between the entire high-frequencies region and the very-high-frequencies region, as shown in Eq. (2) and demonstrated in Fig. 2. In both Eq. (2) and Fig. 2, LF stands for the low frequencies (present in the original blurred image), HF for the high frequencies (reconstructed by the algorithm), VHF for the very high frequencies (generated by the reconstruction process though not related to the original image), and UHF for the ultra-high frequencies (usually distorted by aliasing and thus removed).

$$x=\max_{x\in X}\left\{\frac{\mathrm{Energy}(HF+LF)}{\mathrm{Energy}(VHF)}\right\}\tag{2}$$

One must remember that the Fourier domain in which these calculations are made results from Fourier transforming only the magnitude of the image (excluding the phase), since this is the distribution recorded on a CCD detector.
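A sketch of the criterion of Eq. (2) is given below; the band radii separating LF+HF from VHF are free parameters of our choosing, not values given in the paper:

```python
import numpy as np

def band_energy_ratio(magnitude, r_hf=0.25, r_vhf=0.4):
    """Energy(LF+HF) / Energy(VHF) computed on the centered spectrum of
    the (phaseless) image magnitude; radii are in normalized frequency."""
    spec = np.fft.fftshift(np.fft.fft2(magnitude))
    energy = np.abs(spec) ** 2
    ny, nx = magnitude.shape
    fy = np.fft.fftshift(np.fft.fftfreq(ny))
    fx = np.fft.fftshift(np.fft.fftfreq(nx))
    FY, FX = np.meshgrid(fy, fx, indexing="ij")
    r = np.hypot(FX, FY)
    inner = energy[r <= r_hf].sum()                 # LF + HF band
    vhf = energy[(r > r_hf) & (r <= r_vhf)].sum()   # VHF band
    return inner / max(vhf, 1e-12)

# Stage-2 search (sketch): scan the defocus coefficient, keep the best.
# best = max(coeffs, key=lambda c: band_energy_ratio(
#     np.abs(apply_defocus_phase(field, c))))
```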

Fig. 2. Frequency regions: the central area contains the required frequencies, both the LF and the newly generated HF; the intermediate area contains the undesired by-products of the optimization; the exterior region is padded with zeros.

Finally the optimal correction is obtained, but very high frequencies are evident in the reconstructed image as crater-like holes. Thus, a third and final stage is required. In the third stage each pixel value (magnitude) is replaced by the weighted average of its eight closest neighbors, thus eliminating most of the crater noise (and, unfortunately, a small portion of the required high frequencies). To complete the procedure, the algorithm is performed a second time, using a narrower Gaussian distribution at the phase retrieval stage. This second result, though containing fewer spatial details than the previous one, has fewer holes; more importantly, its holes are located differently from those of the previous result. Thus, any remaining holes are replaced by the values of the second image.
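A minimal sketch of this third stage follows, assuming uniform weights over the eight neighbors and a simple darkness test for locating the crater-like holes (both assumptions are ours):

```python
import numpy as np
from scipy.ndimage import convolve

def smooth_and_patch(mag_wide, mag_narrow, hole_thresh=0.1):
    """Stage 3: replace each pixel by the average of its 8 neighbors,
    then fill remaining crater-like holes from the second
    (narrow-Gaussian) reconstruction."""
    kernel = np.ones((3, 3)) / 8.0
    kernel[1, 1] = 0.0  # exclude the center pixel itself
    smoothed = convolve(mag_wide, kernel, mode="nearest")
    # Crater pixels: much darker than their local neighborhood.
    holes = mag_wide < hole_thresh * smoothed
    result = smoothed.copy()
    result[holes] = mag_narrow[holes]
    return result
```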

The computational cost of this approach is mainly in time consumption, as the memory requirements are much lower than for any algorithm using a range of images rather than a single image. The algorithm offers a tradeoff between image quality and computation time on two different levels. First, if the initial blurred image is given at higher resolution (i.e., as a larger matrix), more data can be extracted from it, but the phase retrieval procedure becomes slower (as the FFT has to deal with more data). Second, when searching for the sharpest image, smaller steps lead to better image quality but require a larger amount of time. In both cases, time consumption increases with image quality.

3. Computer simulations

Before proceeding to the experimental results we must check the feasibility of the suggested algorithm. To do so we generate a synthetic blurred image. We start with the 128×128 Lena image depicted in Fig. 3(a) and simulate (using Matlab®) free-space propagation from the image plane. Figure 3(b) demonstrates the result obtained by simulating a 2 cm by 2 cm object propagating a distance of 10 cm under coherent illumination at a wavelength of 632 nm.
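Such a blurring simulation can be sketched with the angular-spectrum method; the physical parameters below are those quoted in the text, while the implementation is ours, not the paper's Matlab® code:

```python
import numpy as np

def angular_spectrum_propagate(field, dist, wavelength, width):
    """Free-space propagation of a square field over `dist` (all units
    in meters) using the angular-spectrum transfer function."""
    n = field.shape[0]
    f = np.fft.fftfreq(n, d=width / n)          # spatial frequencies [1/m]
    FX, FY = np.meshgrid(f, f)
    fz2 = 1.0 / wavelength**2 - FX**2 - FY**2   # squared longitudinal freq.
    kz = 2 * np.pi * np.sqrt(np.maximum(fz2, 0.0))
    H = np.exp(1j * kz * dist) * (fz2 > 0)      # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Parameters from the text: 2 cm x 2 cm object, 10 cm propagation, 632 nm.
# blurred = np.abs(angular_spectrum_propagate(lena, 0.10, 632e-9, 0.02))**2
```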

Fig. 3. (a) Original 128 by 128 gray-scale object. (b) Blurred out-of-focus image obtained by the free-space propagation simulation.

As seen in Fig. 3, the blurred image has lower frequency content than the original image, and the blurring process also causes contrast reversal for several spatial frequencies, as expected in an imaging setup (note the lips and the eyes). Next we apply the proposed algorithm, which, not surprisingly, finds the focal-plane reconstructed image to be the sharpest of all. The reconstructed results are given in Fig. 4(a) before noise reduction and in Fig. 4(b) after attempting to reduce the noise using the previously described approach.

Fig. 4. (a) Reconstructed image without noise reduction. (b) Reconstructed image after noise reduction.

As seen from Fig. 4, not only does the noise reduction fail to eliminate all of the noise, it also eliminates some of the required high-resolution details. Thus, we can expect the algorithm to yield either a sharp but noisy image or a slightly “cleaner” image with less detail. A more quantitative measure of the improvement obtained by the suggested algorithm is given by the mean square error (MSE). The MSE between the original object and the reconstructed image, shown in Fig. 3(a) and Fig. 4(b), respectively, is 15.7 times smaller than the MSE between the original object and the blurred image, shown in Fig. 3(a) and Fig. 3(b), respectively.
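For completeness, this figure of merit is a plain MSE ratio (a sketch; the factor of 15.7 is the paper's reported result, not something this snippet reproduces):

```python
import numpy as np

def mse(a, b):
    """Mean square error between two equally sized gray-scale images."""
    return np.mean((a.astype(float) - b.astype(float)) ** 2)

# improvement = mse(original, blurred) / mse(original, reconstructed)
# The paper reports an improvement factor of 15.7 for Figs. 3-4.
```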

4. Experimental results

We have applied the proposed algorithm to a set of images captured using a freezing fluorescence microscope. The cryo-imaging system is based on an Olympus IX70 inverted fluorescence microscope with a 100 W U-LH100HG mercury lamp as the source of excitation radiation (Olympus, Japan). The long-distance focal planes of the objectives (PlanFluorit LCPLFL, 60x, 0.7 NA, Olympus, Japan, and 10x, 0.24 NA, PZO, Poland) are located outside the optical transmission micro-cryostat (MMR Technologies, USA), where thin sections of the specimen are placed in glycerol and cooled to 80 K by expanding high-pressure nitrogen. The microscope is equipped with an Olympus U-MSWB filter cube set (Olympus, Japan) containing a 420–480 nm band-pass excitation filter, a 715 nm long-pass filter, and a 500 nm edge dichroic mirror.

The specimens observed were of maize (Zea mays), one of a large number of tropical grasses and cereal plants that are C4 plants [17,18]. The harvested, fully developed leaves were hand-chopped in ice-cold isolation medium into thin sections, which were then investigated. For experiments with isolated intact bundle sheath and mesophyll cells, the cells were obtained by enzymatic digestion with cellulase. Finally, the products of these isolations were also used, after being broken, for fluorescence investigation of intact chloroplasts as the smallest functional photosynthetic compartment of leaves. All samples were embedded with glycerol on the glass of the cryostat holder, which inhibits the formation of ice crystals at liquid nitrogen temperature. The material was fixed in the cutting water, with the σi of the chloroplast isolation medium modified by the inclusion of 0.5% glutaraldehyde. Chlorophyll fluorescence of the maize samples was excited by blue light of the mercury lamp defined by the excitation filter of the Olympus U-MSWB filter cube.

The captured images are partly blurred due to the many pieces of glass en route to the object, and to the superimposing of different planes of the fluorescent images upon one another caused by the high depth of focus (since the images are not obtained by confocal methods). The images one would obtain when generating a high-resolution image of the chloroplasts without freezing (such that one can get closer) should have much higher quality and contain more spatial details (this holds for water-immersion objectives and also for confocal images).

The algorithm suggested by the authors was applied to the image depicted in Fig. 5, which was captured using the technique detailed above.

Fig. 5. Original 128 by 128 blurred and saturated gray-scale image as captured by the freezing fluorescence microscope.

The magnitude distribution imposed on the Fourier plane in the Gerchberg-Saxton algorithm is given in Fig. 6(a), and the resulting magnitude after 100 iterations is presented in Fig. 6(b). Note that the obtained Fourier transform does not contain data over the entire central lobe of the Gaussian, since some frequencies do not exist in the original image.

Fig. 6. (a) The imposed Gaussian magnitude. The image is 256 by 256 pixels and includes zero padding. (b) The obtained magnitude distribution.

The result of stage 2, including the search for the optimal focal-plane image, is given in Fig. 7(a). The result of the third, and final, stage is presented in Fig. 7(b). As seen from the resulting images, the suggested procedure revealed new patterns not only in the blurred regions but also in regions that were originally in deep saturation.

In order to test the quality of the obtained results we compared them with one common single-image technique, the Matlab® blind deconvolution function ‘deconvblind’. We used this function with an initial Gaussian point spread function (PSF) of size 3 by 3 pixels and a standard deviation of 0.4 pixel.
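For readers without Matlab®, a rough stand-in for ‘deconvblind’ is the standard alternating (blind) Richardson-Lucy scheme sketched below; this is a textbook simplification of our own, not the accelerated implementation Matlab® uses:

```python
import numpy as np
from scipy.signal import fftconvolve

def blind_richardson_lucy(image, psf_size=3, psf_sigma=0.4, n_iter=20):
    """Alternating Richardson-Lucy blind deconvolution: update the image
    and the PSF in turn, starting from a small Gaussian PSF."""
    ax = np.arange(psf_size) - psf_size // 2
    XX, YY = np.meshgrid(ax, ax)
    psf = np.exp(-(XX**2 + YY**2) / (2 * psf_sigma**2))
    psf /= psf.sum()
    est = np.full_like(image, image.mean(), dtype=float)
    eps = 1e-12
    half = psf_size // 2
    cy, cx = image.shape[0] // 2, image.shape[1] // 2
    for _ in range(n_iter):
        # Image update (PSF held fixed).
        ratio = image / (fftconvolve(est, psf, mode="same") + eps)
        est *= fftconvolve(ratio, psf[::-1, ::-1], mode="same")
        # PSF update (image held fixed), cropped to the PSF support.
        ratio = image / (fftconvolve(est, psf, mode="same") + eps)
        corr = fftconvolve(ratio, est[::-1, ::-1], mode="same")
        psf *= corr[cy - half:cy + half + 1, cx - half:cx + half + 1]
        psf /= psf.sum() + eps
    return est
```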

Fig. 7. (a) Image intensity obtained at the optimal focal plane, after the second stage of the algorithm. (b) Final intensity distribution of the required image.

The result is shown in Fig. 8(a). As can be seen, the resulting image only improves the contrast with respect to the original image but does not add new spatial information. Figure 8(b) presents the outcome of the new algorithm when the deconvolved image of Fig. 8(a) is used at the final stage of the reconstruction. As seen from the image, this does not improve the result in any noticeable way. The result of executing the algorithm with the deconvolved image as the initial input is not shown, since it contains very little data: the deconvolved image inhibits some of the more important high frequencies present in the original image.

Fig. 8. (a) Intensity image obtained by the Matlab® built-in blind deconvolution function. (b) The final intensity distribution of the required image when using data from the deconvolved image.

In Fig. 9 we apply the proposed technique to a different type of image (again taken with the same microscope described previously) as input for the algorithm. The figure shows both the original intensity distribution and the recovered one. Once again one may see how the proposed approach reconstructs high-resolution information despite the blurring and saturation distortions.

Fig. 9. (a) The original image, containing a very small amount of spatial detail. (b) The reconstructed image, containing a significantly larger amount of spatial information.

Finally, Fig. 10 uses a portion of the object seen in Fig. 9, zoomed ×6, as input for the algorithm. The figure shows both the original intensity distribution and the recovered one. In this case a larger portion of the initial image is saturated, and thus larger regions remain without sharp reconstruction.

Fig. 10. (a) Original image with a very small amount of spatial detail. (b) The reconstructed image with more spatial detail.

5. Conclusion

In this paper the authors suggested a novel technique for single-image reconstruction. The proposed technique makes use of the Gerchberg-Saxton algorithm and allows obtaining fine details out of a single blurred and saturated image.

The algorithm was tested on several cell images, since biological samples are an excellent example of images where the 3-D structure of the cell prevents obtaining a fully focused picture. In many cases capturing such cell images requires extra lighting, resulting in saturated regions. Despite the blurring and the saturation, the proposed algorithm succeeds in revealing parts of the initial structure of the required image, doing so without using any object-oriented information.

The performance of the proposed approach was compared with that of a conventional deconvolution algorithm, and the proposed approach showed significantly improved reconstruction of lost spatial information.

References and links

1. S. Bellini, “Blind deconvolution,” in Blind Deconvolution, S. Haykin, ed., Chap. 2, pp. 8–55 (1994).

2. N. K. Bose, “Wavelet-based blind super resolution from video sequence and in MRI,” Pennsylvania State University Final Report (2005).

3. W. T. Freeman, T. R. Jones, and E. C. Pasztor, “Example-based super-resolution,” IEEE Comput. Graphics Appl. 22, 56–65 (2002).

4. H. Chang, D.-Y. Yeung, and Y. Xiong, “Super-resolution through neighbor embedding,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR ’04), Vol. 1, pp. I-275–I-282 (2004).

5. M. Elad and A. Feuer, “Restoration of a single super resolution image from several blurred, noisy, and undersampled measured images,” IEEE Trans. Image Process. 6, 1646–1658 (1997).

6. A. Zomet and S. Peleg, “Multi-sensor super-resolution,” in Proceedings of the IEEE Workshop on Applications of Computer Vision (WACV ’02), pp. 27–31 (2002).

7. D. Rajan, S. Chaudhuri, and M. V. Joshi, “Multi-objective super resolution: concepts and examples,” IEEE Signal Process. Mag. 20, 49–61 (2003).

8. M. K. Ng and N. K. Bose, “Mathematical analysis of super-resolution methodology,” IEEE Signal Process. Mag. 20, 62–74 (2003).

9. D. Capel and A. Zisserman, “Computer vision applied to super resolution,” IEEE Signal Process. Mag. 20, 75–86 (2003).

10. R. W. Gerchberg and W. O. Saxton, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik (Stuttgart) 35, 237–246 (1972).

11. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21, 2758–2769 (1982).

12. R. W. Gerchberg, “Super-resolution through error energy reduction,” Opt. Acta 21, 709–720 (1974).

13. A. Papoulis, “A new algorithm in spectral analysis and band-limited extrapolation,” IEEE Trans. Circuits Syst. 22, 735–742 (1975).

14. E. Gur and Z. Zalevsky, “Iterative single-image digital super-resolution using partial high-resolution data,” Lecture Notes in Engineering and Computer Science, WCE 2007, pp. 630–634 (2007).

15. S. Kirkpatrick, C. D. Gelatt, Jr., and M. P. Vecchi, “Optimization by simulated annealing,” Science 220, 671–680 (1983).

16. J. M. Rodenburg, “The phase problem, microdiffraction and wavelength-limited resolution,” Ultramicroscopy 27, 413–422 (1989).

17. G. Haberlandt, “Vergleichende Anatomie des assimilatorischen Gewebesystems bei Pflanzen,” Jahrbuch der Wissenschaftlichen Botanik 13, 74–188 (1881).

18. E. Williams, “Fine structure of vascular and epidermal plastids of the mature maize leaf,” Protoplasma 79, 395–400 (1974).
