Optica Publishing Group

Reconstruction of distorted underwater images using robust registration

Open Access

Abstract

Imaging through a fluctuating air-water surface is a challenging task, since light rays bent by unknown amounts produce complex geometric distortions. This paper presents a new algorithm for undoing such dynamic refractive effects. An iterative robust registration algorithm is employed to overcome the structured distortions caused by the waves, registering each frame to a reference frame. To obtain a high-quality reference frame, an image is reconstructed from patches selected across the frame sequence, and a blind deconvolution algorithm further sharpens it. Experiments show that our method achieves significant improvement over existing methods.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

When imaging through a fluctuating air-water surface, the captured images usually contain severe distortions. Assuming the water is shallow and clear, a complex series of refractions and reflections along the imaging path is the main cause. According to Snell's law, light rays bend when they pass through the air-water interface; because the surface fluctuates randomly, the amount of bending varies unpredictably from region to region. With no prior knowledge of the distortions available, reconstructing the distorted underwater scene is a major challenge.

Research on this challenging problem has been carried out for decades. Several attempts have been made to recover undistorted images from a sequence of distorted frames. Averaging the sequence works well when the waves are slight, but when the waves are strong the mean image is extremely blurry. Earlier works instead find the regions with the least distortion by manifold embedding [1], clustering [2], or the bispectrum technique [3]. Later, a model-based tracking method [4] was proposed to simultaneously recover both the water surface and the distorted underwater scene. Recently, several image-registration-based methods were proposed in [5–9].

During the registration process, the mean image is usually used as the reference frame, but the frame-to-mean registration is impeded by the severely blurred mean. Gaussian-blurred frames can be employed instead to make registration work [5]: once the blurry regions of the mean are also blurred in the frames, registration is guided to focus on the sharper regions of the mean rather than the corrupted blurry ones. Although this strategy works well for turbulence mitigation, it is prone to introduce unexpected misalignments during registration.

In this paper, a new image restoration approach is proposed for video sequences distorted by an undulating water surface. We use an iterative robust registration algorithm to remove most of the distortion in the frames. Instead of registering each frame against the severely blurry mean, a patch-fusion process first discards the more severely distorted patches and reconstructs a high-quality image; the fused image is then deblurred by blind deconvolution. After registration, the residual unstructured noise is eliminated by robust principal component analysis. Compared with the state-of-the-art method [5], our method performs more stably and has a shorter processing time.

The rest of the article is organized as follows: Section 2 reviews related work, Section 3 describes the proposed image restoration method in detail, Section 4 summarizes the restoration algorithm, Section 5 presents the experimental results and analysis, and Section 6 concludes the paper.

2. Related work

According to the Cox-Munk law [10], when the water surface is sufficiently large and the waves are stationary, the normals of the water surface approximately follow a Gaussian distribution. Inspired by this law, some approaches take the center of the distribution of patches from the video sequence as the orthoscopic patch; these approaches are known as the "lucky imaging" technique. Efros et al. [1] formulate the reconstruction problem as a manifold embedding problem, where the patch with the shortest global distance is selected as the undistorted patch at each spatial location. Following [1], Donate et al. [2] add a multi-stage clustering algorithm to progressively remove the patches containing large amounts of translation and motion blur separately. Wen et al. [11,12] cast the reconstruction problem as a phase recovery task using the bispectrum technique. Later in [3], Wen et al. combine the phase recovery task with a lucky-region selection operation, where regions with lower average bicoherence values are selected as lucky regions. Nicolas et al. [13] propose an online restoration approach, where a temporal infinite impulse response filter first stabilizes the video and a spatial Wiener deconvolution filter then deblurs the frames.

Some researchers attempt to estimate the structure of the water surface by applying optical flow [14], by measuring the radial slope [15], or by using a multi-viewpoint camera model [16]. Turlaev et al. [17] use different illumination sources on the bottom object and on the water surface, respectively, and then extract the surface-slope information to correct the refraction distortions. In [4], Tian et al. first use a simple wave equation to build a compact spatial distortion model of the water surface, and then use the water bases generated by the model to inversely reconstruct an undistorted image. The algorithms later proposed in [18,19] by Tian et al. can handle more complex distortions, including water waves. Seemakurthy et al. [20] focus on removing the distortions associated with motion blur induced by unidirectional cyclic waves and circular ripples; their algorithm exploits the dynamic nature of the water surface and requires only a single blurred observation under certain modest constraints.

Some researchers apply image registration to recover the underwater scene. Oreifej et al. [5] present a two-stage restoration approach, where the first stage uses an iterative robust registration algorithm to overcome the structured distortions of the waves and the second stage applies rank minimization to eliminate the sparse noise. Halder et al. [21] propose a restoration method using pixel shift maps. Their method is similar to [5] in that both use image registration to calculate pixel shift maps that are later used to warp the frames, but Halder et al. use the mean of the individual pixel shift maps to dewarp the frames. Recently in [6], they propose to select the sharpest frame as the reference frame and calculate the shift maps from a set of subframes with higher sharpness. In [22], they apply a generalized regression neural network to predict the warping of upcoming frames of the underwater scene, where the motion fields are obtained by optical flow. Kanaev et al. combine optical-flow-based image warping with lucky patch selection to restore images captured above the surface [23] and underwater [24]. Recently in [7], they develop a structure tensor oriented image quality metric for lucky patch selection to improve performance. Following [5], Li et al. [8] select the best image patches from the registered frames to reconstruct an undistorted image. Zhang et al. [9] propose a frame-by-frame shift-map accumulation scheme, which mitigates distortion and blur progressively.

We propose a robust method in the same spirit as the state-of-the-art method [5], but we take a different route to make the registration process work, pursuing both better performance and shorter processing time.

3. Proposed image restoration method

The proposed image restoration method is illustrated in the flow chart of Fig. 1. For a video sequence V = {I1,...,In} distorted by a moving water surface, where Ik ∈ ℝ^{h×w} (k = 1,2,...,n), our goal is to recover an undistorted sequence F = {If1,...,Ifn}. The robust registration algorithm comprises three major steps:

Fig. 1 Simple block diagram of the proposed image restoration method.

  • 1. Patch fusion: severely distorted image patches within each frame and across the entire video sequence are clustered and discarded, and the remaining patches are fused into a single, less distorted image;
  • 2. Blind deconvolution: an accelerated Richardson–Lucy deconvolution algorithm deblurs the reconstructed image, which serves as the reference M of the registration process;
  • 3. Image registration: a non-rigid registration algorithm dewarps the frames by registering each one against the reference.

After applying the robust registration algorithm several times, the frames still contain some misalignments and noise. Robust principal component analysis is employed as a post-processing step to refine the frames.

3.1. Principles

According to the Cox-Munk law [10], when an underwater object is observed with a camera for a sufficiently long time, the shift of each pixel centers on its true position; the temporal mean of the pixel shifts should be zero, so each pixel of the mean image should lie at its true position. In practice, over a finite period, random fluctuations mean that not all points of the object have a mean shift of zero. Although the mean image contains some pixels at their true positions, it also contains pixels far from theirs, and these malposed pixels hinder the standard mean-to-frame registration. Deblurring the mean has limited effectiveness, especially when the water fluctuates strongly. We therefore eliminate these malposed pixels from the mean image and guide the registration to focus on the pixels at their true positions.

3.2. Patch fusion

First, the input video frames are divided into smaller patches of equal size, with 50% overlap between adjacent patches. Based on the structural similarity index (SSIM) [25], we pick out and discard the severely distorted patches within each patch sequence. SSIM assesses the visual impact of three characteristics of an image (luminance, contrast, and structure) and can be expressed as

$$\mathrm{SSIM}(I,M)=\frac{(2\mu_I\mu_M+C_1)(2\sigma_{IM}+C_2)}{(\mu_I^2+\mu_M^2+C_1)(\sigma_I^2+\sigma_M^2+C_2)},$$
where $\mu_I,\mu_M$ are the local means, $\sigma_I,\sigma_M$ the standard deviations, and $\sigma_{IM}$ the cross-covariance of the images $I$ and $M$. In our algorithm, SSIM is calculated between the patch $I$ and the mean patch $M$ of the current patch sequence. $C_1$ and $C_2$ are constants, defined as $C_1=(0.01\times L)^2$ and $C_2=(0.03\times L)^2$, where $L$ is the specified dynamic range.
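As a concrete illustration, Eq. (1) can be computed with a few lines of NumPy. This is a sketch, not the authors' MATLAB code: a single global score is computed over the whole patch (matching how the metric is used here for patch selection) rather than with the usual local sliding window.

```python
import numpy as np

def ssim_patch(I, M, L=255.0):
    """Global SSIM between a patch I and the mean patch M (Eq. 1)."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mu_I, mu_M = I.mean(), M.mean()
    var_I, var_M = I.var(), M.var()          # sigma_I^2, sigma_M^2
    cov = ((I - mu_I) * (M - mu_M)).mean()   # sigma_IM
    return ((2 * mu_I * mu_M + C1) * (2 * cov + C2)) / \
           ((mu_I ** 2 + mu_M ** 2 + C1) * (var_I + var_M + C2))
```

By construction, a patch compared against itself scores exactly 1, and any structural discrepancy lowers the score.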

The classification is completed by the K-means clustering algorithm. For each patch sequence, the cluster with the higher SSIM values is preserved and averaged into a single patch. All the mean patches are then placed at their true positions to form a single overlapped image. When stitching the patches together, a two-dimensional weighted Hanning window [3] is applied to each patch to mitigate boundary effects.
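The fusion step for one patch location can be sketched as follows. This is a NumPy illustration under stated assumptions, not the authors' implementation: the quality metric is passed in as a callable (SSIM in the paper), and a simple 1-D two-cluster K-means on the scores stands in for the K-means classification.

```python
import numpy as np

def hann2d(h, w):
    # 2-D weighted Hanning window for feathering patch boundaries [3]
    return np.outer(np.hanning(h), np.hanning(w))

def fuse_patch_stack(patches, quality, n_kmeans_iters=20):
    """Average the better-scoring cluster of a stack of co-located patches.

    patches : (n, h, w) array of patches from the sequence.
    quality : callable scoring each patch against the stack mean.
    """
    mean_patch = patches.mean(axis=0)
    scores = np.array([quality(p, mean_patch) for p in patches])
    centres = np.array([scores.min(), scores.max()], dtype=float)
    labels = np.zeros(len(scores), dtype=int)
    for _ in range(n_kmeans_iters):
        labels = np.abs(scores[:, None] - centres[None, :]).argmin(axis=1)
        for j in (0, 1):
            if np.any(labels == j):
                centres[j] = scores[labels == j].mean()
    keep = labels == centres.argmax()       # cluster with higher quality
    fused = patches[keep].mean(axis=0)
    return fused * hann2d(*fused.shape)     # window applied before stitching
```

With 50% overlap and Hanning weighting, summing the windowed patches into the output canvas gives roughly unit total weight everywhere, which is what mitigates the boundary effects.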

3.3. Blind deconvolution

The image reconstructed by patch fusion is still blurry, though with far fewer malposed pixels, so it needs further improvement. Here, a blind deconvolution technique adapted from [26] is applied to deblur the reconstructed image. In general, the iterative Richardson–Lucy deconvolution algorithm can be modeled as

$$f_{k+1}=f_k\left(h_k\star\frac{g}{h_k\otimes f_k}\right)\equiv\Psi(f_k),$$
where $g$ is the distorted noisy image, $f$ is the original undistorted image, $h$ is the point spread function of the system, $\otimes$ is the convolution operator, $\star$ is the correlation operator, $k$ is the iteration number, and $\Psi(\cdot)$ denotes the resulting update function.

In practice, the Richardson–Lucy iteration in Eq. (2) needs several hundred iterations to reach convergence. Therefore, based on the principles of vector extrapolation [26], an acceleration strategy is employed to speed up convergence:

$$b_k=a_k+\lambda_k\delta_k,$$
where $\delta_k=a_k-a_{k-1}$, $a_{k+1}=b_k+\theta_k$, $\theta_k=\Psi(b_k)-b_k$, and $\lambda_k=(\theta_{k-1}\cdot\theta_{k-2})/(\theta_{k-2}\cdot\theta_{k-2})$.

In addition, the initial point spread function and the maximum number of iterations have to be set before running the deconvolution. In our algorithm, the initial point spread function is a 3 × 3 identity matrix, and the maximum number of iterations is ten.
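The accelerated iteration of Eqs. (2)–(3) can be sketched in NumPy as below. This is an illustration of the Biggs–Andrews scheme [26] under simplifying assumptions, not the authors' code: the PSF is taken to be the same size as the image, centred at the (0, 0) corner, and boundaries are treated as circular so that convolution and correlation reduce to FFT products.

```python
import numpy as np

def _conv(x, otf):   # circular convolution via the FFT
    return np.real(np.fft.ifft2(np.fft.fft2(x) * otf))

def _corr(x, otf):   # circular correlation: conjugate OTF
    return np.real(np.fft.ifft2(np.fft.fft2(x) * np.conj(otf)))

def accelerated_rl(g, psf, n_iter=10, eps=1e-12):
    """Richardson-Lucy deconvolution with vector extrapolation [26]."""
    otf = np.fft.fft2(psf)
    rl = lambda f: f * _corr(g / (_conv(f, otf) + eps), otf)  # Psi(f), Eq. (2)
    a = a_prev = g.astype(float)
    th_prev = th_prev2 = None
    for _ in range(n_iter):
        if th_prev is not None and th_prev2 is not None:
            lam = (th_prev * th_prev2).sum() / ((th_prev2 ** 2).sum() + eps)
            lam = min(max(lam, 0.0), 1.0)   # clamp for stability [26]
        else:
            lam = 0.0
        b = a + lam * (a - a_prev)          # b_k = a_k + lambda_k * delta_k
        theta = rl(b) - b                   # theta_k = Psi(b_k) - b_k
        a_prev, a = a, b + theta            # a_{k+1} = b_k + theta_k
        th_prev2, th_prev = th_prev, theta
    return a
```

With a delta PSF the iteration is a fixed point, so the input is returned unchanged; a broader PSF drives the estimate toward the deblurred image in far fewer iterations than plain Richardson–Lucy.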

3.4. Image registration

Similar to [27], a nonrigid B-spline based image registration technique is applied to dewarp the distorted frames. The basic idea of B-spline based free-form deformation is to deform the moving image toward the reference by manipulating an underlying mesh of control points. The motion of each point in the moving image is obtained by interpolating the motion at the control points. For an image domain $\Omega=\{(x,y)\,|\,0\le x\le X,\ 0\le y\le Y\}$, if we denote by $\Phi$ an $n_x\times n_y$ mesh of control points $\phi_{i,j}$ with uniform spacing, the free-form deformation can be formulated as

$$T(x,y)=\sum_{m=0}^{3}\sum_{l=0}^{3}B_m(v)B_l(u)\,\phi_{i+l,j+m},$$
where $i=\lfloor x/n_x\rfloor-1$, $j=\lfloor y/n_y\rfloor-1$, $u=x/n_x-\lfloor x/n_x\rfloor$, $v=y/n_y-\lfloor y/n_y\rfloor$, and $B_l$ is the $l$-th standard cubic B-spline basis function. The cubic B-spline basis functions are defined as $B_0(t)=(1-t)^3/6$, $B_1(t)=(3t^3-6t^2+4)/6$, $B_2(t)=(-3t^3+3t^2+3t+1)/6$, and $B_3(t)=t^3/6$, where $0\le t<1$. These basis functions act as weights that determine the contribution of each control point to the free-form deformation according to its distance from the point $(x,y)$. To trade off model flexibility against computational cost, the resolution of the control-point mesh is determined in a coarse-to-fine fashion [28].
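Evaluating Eq. (4) at a single point can be sketched as follows. This is an illustrative NumPy fragment: for simplicity the 4×4 support starts at the floored cell index and `phi` is assumed to carry enough border control points, a minor reindexing of the $i=\lfloor x/n_x\rfloor-1$ convention in the text.

```python
import numpy as np

def bspline_basis(t):
    # B_0..B_3 from the text, for t in [0, 1); they sum to 1
    return np.array([(1 - t) ** 3,
                     3 * t ** 3 - 6 * t ** 2 + 4,
                     -3 * t ** 3 + 3 * t ** 2 + 3 * t + 1,
                     t ** 3]) / 6.0

def ffd(x, y, phi, sx, sy):
    """Free-form deformation T(x, y) of Eq. (4).

    phi : (ny, nx, 2) mesh of control-point displacements.
    sx, sy : uniform control-point spacings.
    """
    i, j = int(x // sx), int(y // sy)
    u, v = x / sx - i, y / sy - j
    Bu, Bv = bspline_basis(u), bspline_basis(v)
    T = np.zeros(2)
    for m in range(4):
        for l in range(4):
            T += Bv[m] * Bu[l] * phi[j + m, i + l]
    return T
```

Because the four basis functions form a partition of unity, a uniform control-point displacement reproduces that same displacement everywhere, which is a convenient sanity check on any implementation.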

The smoothness constraint on free-form deformation is formulated as

$$E_{\text{smooth}}=\frac{1}{A}\int_0^X\!\!\int_0^Y\left[\left(\frac{\partial^2T}{\partial x^2}\right)^2+\left(\frac{\partial^2T}{\partial y^2}\right)^2+2\left(\frac{\partial^2T}{\partial x\,\partial y}\right)^2\right]dx\,dy,$$
where $A$ denotes the area of the image. The mean square difference of intensities [29] measures the similarity of the two images and is defined as
$$E_{\text{similarity}}=\frac{\sum_{(x,y)\in\Omega}\left(I_A(x,y)-I_B(x,y)\right)^2}{w\times h},$$
where $I_A$ and $I_B$ denote the intensities of corresponding pixels in images $A$ and $B$, respectively. Using a weighting parameter $\alpha$ to combine the two terms,

$$E(\Phi)=E_{\text{similarity}}+\alpha E_{\text{smooth}}.$$

Using iterative gradient descent, we minimize Eq. (7) to obtain the optimal transformation parameters $\Phi$.
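The registration cost of Eqs. (5)–(7) can be made concrete with a short NumPy sketch. Note the assumption: the continuous bending-energy integral of Eq. (5) is replaced by second finite differences of a dense displacement field, a common discrete surrogate rather than the authors' exact formulation.

```python
import numpy as np

def e_similarity(IA, IB):
    # Eq. (6): mean squared intensity difference over the image domain
    return ((IA - IB) ** 2).sum() / IA.size

def e_smooth(T):
    """Discrete surrogate for Eq. (5); T is a displacement field (h, w, 2)."""
    Txx = np.diff(T, n=2, axis=1)              # d2T/dx2
    Tyy = np.diff(T, n=2, axis=0)              # d2T/dy2
    Txy = np.diff(np.diff(T, axis=0), axis=1)  # d2T/dxdy
    area = T.shape[0] * T.shape[1]
    return ((Txx ** 2).sum() + (Tyy ** 2).sum() + 2 * (Txy ** 2).sum()) / area

def energy(IA, IB, T, alpha=0.01):
    # Eq. (7): the cost minimised by gradient descent over the mesh
    return e_similarity(IA, IB) + alpha * e_smooth(T)
```

The smoothness term is zero for any affine field (its second derivatives vanish), so it penalises only the bending of the deformation, not global translations or shears.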

3.5. Post-processing

After performing the modified image registration several times, we obtain a sequence of less distorted frames R = {Ir1,...,Irn}. However, the frames still contain some misalignments and random noise that the robust image registration cannot handle. Following [5], robust principal component analysis is applied to produce a noise-free video sequence.
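The post-processing idea (a low-rank clean video plus sparse noise) can be sketched with a generic robust-PCA solver. This is principal component pursuit solved with the inexact augmented Lagrange multiplier method, a standard choice in the spirit of [5], not the authors' exact implementation; the video is assumed flattened into a pixels-by-frames matrix D.

```python
import numpy as np

def _shrink(X, tau):
    # soft-thresholding (proximal operator of the l1 norm)
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca(D, n_iter=300, tol=1e-7):
    """Decompose D ~ L + S with L low rank and S sparse."""
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n))
    norm2 = np.linalg.norm(D, 2)
    Y = D / max(norm2, np.abs(D).max() / lam)   # dual initialisation
    mu = 1.25 / norm2
    mu_bar, rho = mu * 1e7, 1.5
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(n_iter):
        # singular value thresholding for the low-rank part
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * _shrink(sig, 1.0 / mu)) @ Vt
        # entrywise shrinkage for the sparse part
        S = _shrink(D - L + Y / mu, lam / mu)
        Z = D - L - S
        Y = Y + mu * Z
        mu = min(mu * rho, mu_bar)
        if np.linalg.norm(Z) / np.linalg.norm(D) < tol:
            break
    return L, S
```

For a registered sequence, the recovered L corresponds to the stabilised video and S absorbs the residual sparse noise and misalignments.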

4. Summary of algorithm


The complete restoration strategy is summarized in Algorithm 1.

5. Experiment results and analysis

The proposed method was implemented in MATLAB (MathWorks, USA). The data sets and source code are publicly available at https://github.com/yangxuxuxu/underwater-image-reconstruction.git. To verify its performance, we compared the proposed method with the state-of-the-art Oreifej method [5], testing both on the same data sets used in [5] (which originally come from [4]). We then used a sequence of frames collected from a real scenario of seeing through a waving water surface. For both methods, the maximum number of iterations was set to five. In the patch-fusion step of our method, the patch size for each data set must be set manually before running the algorithm.

5.1. Results of the same data set as the Oreifej method

The sample frame and mean of each data set are shown in Fig. 2. Each data set contains 61 frames. The frame sizes are 266 × 290 for “brick”, 238 × 285 for “checkboard”, and 272 × 282 for “tiny fonts”; the corresponding patch sizes are set to 76 × 116, 68 × 114, and 68 × 94, respectively.

Fig. 2 Sample frame and mean of each data set.

As mentioned above, standard mean-to-frame registration is hindered by the severely blurry mean. The two methods take different paths toward the same goal. The Oreifej method brings the frames to the blur level of the mean by estimating a blur kernel, so that registration concentrates on the sharper regions of the mean. Its defect is that blurring the frames may introduce unexpected local misalignments: after blurring, some edges of the frames are shifted while the edges of the mean are also ambiguous, so registration cannot correct the distortions and may even create new misalignments, which can be aggravated as the iterations proceed. In contrast, we reconstruct a sharper, correct image as the registration reference. At each iteration the frames are shifted closer to the reference, and the registered frames in turn generate a better reference for the next iteration; reconstructing a sharper reference largely avoids misalignments during registration. As Fig. 3 shows, with more registration iterations the outputs of both methods become less distorted and sharper. However, in the Oreifej results the regions marked with red rectangles are not well restored, and some even become more distorted, as in “brick” and “tiny fonts”. The results of our method, in contrast, improve progressively with the iterations. Consequently, our method is superior to the Oreifej method in the stability of distortion removal.

Fig. 3 The mean results of the two methods after different iterations of the registration process. The regions with larger distortions are marked with red rectangles.

5.2. Results of quantitative analysis

In [5], the l1 difference between the frames Ik and the mean M of the current sequence V is used as a stopping criterion for the registration process; it is defined in Eq. (8). In the experiments, a fixed threshold of 0.025 on l1 is applied. The convergence results of the two methods are listed in Table 1; fewer iterations mean less computation time.


Table 1. Number of Iterations When the Robust Registration Process Stops

$$l_1(V)=\frac{\sum_k\sum_{(x,y)}\left|M(x,y)-I_k(x,y)\right|}{w\times h\times n}.$$
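The stopping criterion of Eq. (8) is a one-liner in NumPy; this sketch assumes the sequence is stacked as an (n, h, w) array of normalized intensities.

```python
import numpy as np

def l1_metric(V):
    """Eq. (8): mean absolute deviation of the frames from their
    temporal mean. V has shape (n, h, w), so V.size = w * h * n."""
    M = V.mean(axis=0)
    return np.abs(M - V).sum() / V.size
```

In use, the robust registration loop simply repeats until `l1_metric` of the registered sequence drops below the fixed 0.025 threshold.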

From Table 1, we can see that the number of iterations of the Oreifej method varies across sequences. Most sequences need 5 or more iterations to get aligned, except the “checkboard” sequence, whose simple structure is easy to align. Compared with the Oreifej method, our method is more stable across all sequences: its average number of iterations is close to four, fewer than that of the Oreifej method. This indicates that for highly detailed sequences or sequences suffering severe turbulence, the blurring step brings extra iterations into the registration algorithm, and in cases of more severe turbulence it can even cause the registration to fail.

Furthermore, to measure processing time, we used a laptop with an Intel Core i5-4210U processor and 16 GB of RAM on the “checkboard” data set. The patch-fusion step costs 12.17 s and the blind deconvolution step 3.24 s. The image registration step costs 401.83 s per iteration; since the data set has 61 frames, each frame costs about 6.59 s. By comparison, one iteration of the Oreifej method costs 481.92 s, about 7.90 s per frame. Image registration is thus the most time-consuming step, and the proposed method shortens both the per-iteration time and the total number of iterations, while also increasing the success rate of the registration.

Several relevant quality metrics, namely the SSIM [25], the mean square error (MSE), and the peak signal-to-noise ratio (PSNR), are used to quantitatively compare the proposed method with the Tian method [4] and the Oreifej method [5]. The MSE and PSNR are defined in Eqs. (9) and (10), respectively. These three metrics require a distortion-free ground-truth image. As ground truth is unavailable for the “large fonts” and “checkboard” sequences, the blind image quality (BIQ) metric [30] is employed instead, as in the lucky patch selection of [6]; a sharper image yields a higher BIQ value. The comparison results are presented in Table 2: our method performs close to the Oreifej method, and both outperform the Tian method.

$$\mathrm{MSE}=\frac{\sum_{(x,y)}\left(M(x,y)-G(x,y)\right)^2}{w\times h},$$
$$\mathrm{PSNR}=10\log_{10}\!\left(\frac{\mathrm{MAX}(G)^2}{\mathrm{MSE}}\right),$$
where $G$ represents the ground-truth image and $\mathrm{MAX}(\cdot)$ denotes the maximum possible pixel value.
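Eqs. (9)–(10) translate directly to NumPy; in this sketch the `peak` parameter stands in for MAX(G), which is 255 for 8-bit images (an assumption, since the metrics here only require the maximum possible value).

```python
import numpy as np

def mse(M, G):
    # Eq. (9): mean squared error against the ground truth G
    return ((M - G) ** 2).sum() / G.size

def psnr(M, G, peak=255.0):
    # Eq. (10): peak signal-to-noise ratio in dB
    return 10.0 * np.log10(peak ** 2 / mse(M, G))
```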


Table 2. Comparison of Image Quality Metrics among the Proposed Method, the Oreifej method [5] and the Tian method [4]a

5.3. Results of the real through-water scene’s data set

In the scenario of seeing through a waving water surface, a camera was mounted above a checkboard tablet submerged in a semi-transparent water tank. The frame rate of the camera was 15 frames per second, and the water waves were generated by a water pump. The frame size of the captured sequence is 276 × 348, and the corresponding patch size is 84 × 108. As Fig. 4 shows, the sample frame is distorted and the mean of the sequence is blurry. On this data, the Oreifej method still fails to handle some local distortions, marked in red in Fig. 5(a).

Fig. 4 Sample frame and mean of our data set.

Fig. 5 The mean results of the two methods with our data set. The regions with larger distortions are marked with red rectangles.

5.4. Analysis of the robust registration

Figure 6 shows the results of registration using the mean as the reference (Fig. 6(a)) and using the mean deblurred by blind deconvolution as the reference (Fig. 6(b)). Registration against the deblurred mean corrects part of the distortions (marked by a yellow rectangle) that standard mean-to-frame registration cannot handle. However, the regions marked by red rectangles show that neither variant restrains the misalignments that occur during registration; the Oreifej method faces the same problem, as shown in Fig. 3. In our robust registration algorithm, once the patch-fusion step discards the lower-quality patches, these local misalignments are well restrained. The three steps of the robust registration algorithm, i.e. patch fusion, blind deconvolution, and registration, are therefore not an arbitrary combination but tightly connected and complementary; only their combination achieves the best results.

Fig. 6 The mean after applying five iterations of registration. (a) Use the mean without deblurring as the reference (standard mean-to-frame registration). (b) Use the mean deblurred by the blind deconvolution as the reference.

5.5. Insight into the image quality metric

In the patch-fusion process, the SSIM is selected as the quality metric; we also tested our algorithm with the BIQ metric [30]. The SSIM is a full-reference metric, while BIQ is a no-reference metric. The results in Fig. 7 indicate that reconstructing a high-quality reference works well in restraining local misalignments, but the metric used in patch fusion also affects the final performance of our method. BIQ measures the sharpness of an image, whereas SSIM focuses on its structural information; for recovering distortions, structural information tends to evaluate patch quality better than sharpness. The results also imply that the proposed method could be further improved by selecting a better image quality metric.

Fig. 7 Mean results with different image quality metrics.

6. Conclusion

This article proposes a new robust-registration-based algorithm for restoring images distorted by a fluctuating water surface. In our robust registration, we reconstruct a high-quality reference rather than directly deblurring the mean. First, using the SSIM as the image quality metric, lower-quality regions across the frames are picked out and discarded, and higher-quality regions are fused into a single image; blind deconvolution then deblurs the reconstructed image. Using this reconstructed image as the reference restrains unexpected misalignments during registration, and the frames improve stably and progressively as the registration iterates.

We have evaluated the method on the data sets from [4] and on a real scenario of seeing through a waving water surface. Compared with the state-of-the-art method [5], our method recovers distortions better: as the registration iterates, the method in [5] fails to handle some local distortions, and several unexpected misalignments are even progressively aggravated, whereas our method restores the distortions stably without introducing new misalignments. Testing the proposed method with another image quality metric suggests that a more suitable metric may further improve performance; finding such a metric will be investigated in future work.

Funding

Science and Technology Commission of Shanghai Municipality (18JC1410402).

References

1. A. Efros, V. Isler, J. Shi, and M. Visontai, “Seeing through water,” in Proceedings of Neural Information Processing Systems (Neural Information Processing Systems Foundation, 2004), pp. 393–400.

2. A. Donate, G. Dahme, and E. Ribeiro, “Classification of textures distorted by water waves,” in Proceedings of International Conference on Pattern Recognition (IEEE, 2006), pp. 421–424.

3. Z. Wen, A. Lambert, D. Fraser, and H. Li, “Bispectral analysis and recovery of images distorted by a moving water surface,” Appl. Opt. 49(33), 6376–6384 (2010). [CrossRef]   [PubMed]  

4. Y. Tian and S. G. Narasimhan, “Seeing through water: image restoration using model-based tracking,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2009), pp. 2303–2310. [CrossRef]  

5. O. Oreifej, G. Shu, T. Pace, and M. Shah, “A two-stage reconstruction approach for seeing through water,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2011), pp. 1153–1160. [CrossRef]  

6. K. K. Halder, M. Paul, M. Tahtali, S. G. Anavatti, and M. Murshed, “Correction of geometrically distorted underwater images using shift map analysis,” J. Opt. Soc. Am. A 34(4), 666–673 (2017). [CrossRef]   [PubMed]  

7. A. V. Kanaev, W. Hou, S. R. Restaino, S. Matt, and S. Gładysz, “Restoration of images degraded by underwater turbulence using structure tensor oriented image quality (STOIQ) metric,” Opt. Express 23(13), 17077–17090 (2015). [CrossRef]   [PubMed]  

8. L. Li, Q. Wang, and Z. Xiao, “Underwater Image Restoration Algorithm from Distorted Video,” J. Syst. Simul. 24(1), 188–196 (2012).

9. R. Zhang, D. He, Y. Li, L. Huang, and X. Bao, “Synthetic imaging through wavy water surface with centroid evolution,” Opt. Express 26(20), 26009–26019 (2018). [CrossRef]   [PubMed]  

10. C. Cox and W. Munk, “Slopes of the sea surface deduced from photographs of sun glitter,” Bull. Scripps Inst. Oceanogr. 6, 401–479 (1956).

11. Z. Wen, D. Fraser, A. Lambert, and H. Li, “Reconstruction of Underwater Image by Bispectrum,” in Proceedings of IEEE International Conference on Image Processing (IEEE, 2007), pp. 545–548.

12. Z. Wen, A. Lambert, and D. Fraser, “Reconstruction of Imagery Reflected from Water Surface,” in Signal Recovery and Synthesis (OSA, 2007), pp. 18–20.

13. N. Paul, A. de Chillaz, and J.-L. Collette, “On-line restoration for turbulence degraded video in nuclear power plant reactors,” Signal Image Video Process. 9(3), 601–610 (2015). [CrossRef]  

14. H. Murase, “Surface shape reconstruction of a nonrigid transparent object using refraction and motion,” IEEE Trans. Pattern Anal. Mach. Intell. 14(10), 1045–1052 (1992). [CrossRef]  

15. D. M. Milder, P. W. Carter, N. L. Flacco, B. E. Hubbard, N. M. Jones, K. R. Panici, B. D. Platt, R. E. Potter, K. W. Tong, and D. J. Twisselmann, “Reconstruction of through-surface underwater imagery,” Waves Random Complex Media 16(4), 521–530 (2006). [CrossRef]  

16. T. Treibitz, Y. Y. Schechner, and H. Singh, “Flat refractive geometry,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2008).

17. D. G. Turlaev and L. S. Dolin, “On Observing Underwater Objects through a Wavy Water Surface: A New Algorithm for Image Correction and Laboratory Experiment,” Izv., Atmos. Ocean. Phys. 49(3), 339–345 (2013). [CrossRef]  

18. Y. Tian and S. G. Narasimhan, “Globally Optimal Estimation of Nonrigid Image Distortion,” Int. J. Comput. Vis. 98(3), 279–302 (2012). [CrossRef]  

19. Y. Tian and S. G. Narasimhan, “Theory and Practice of Hierarchical Data-driven Descent for Optimal Deformation Estimation,” Int. J. Comput. Vis. 115(1), 44–67 (2015). [CrossRef]  

20. K. Seemakurthy and A. N. Rajagopalan, “Deskewing of Underwater Images,” IEEE Trans. Image Process. 24(3), 1046–1059 (2015). [CrossRef]   [PubMed]  

21. K. K. Halder, M. Tahtali, and S. G. Anavatti, “Simple and efficient approach for restoration of non-uniformly warped images,” Appl. Opt. 53(25), 5576–5584 (2014). [CrossRef]   [PubMed]  

22. K. K. Halder, M. Tahtali, and S. G. Anavatti, “An Artificial Neural Network Approach for Underwater Warp Prediction,” in Proceedings of Hellenic Conference on Artificial Intelligence (ACM, 2014), pp. 384–394. [CrossRef]  

23. A. V. Kanaev, J. Ackerman, E. Fleet, and D. Scribner, “Imaging Through the Air-Water Interface,” in Proceedings of Computational Optical Sensing and Imaging (OSA, 2009), pp. 13–15.

24. A. V. Kanaev, W. Hou, S. Woods, and L. N. Smith, “Restoration of turbulence degraded underwater images,” Opt. Eng. 51(5), 057007 (2012). [CrossRef]  

25. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image Quality Assessment: From Error Visibility to Structural Similarity,” IEEE Trans. Image Process. 13(4), 600–612 (2004). [CrossRef]   [PubMed]  

26. D. S. C. Biggs and M. Andrews, “Acceleration of iterative image restoration algorithms,” Appl. Opt. 36(8), 1766–1775 (1997). [CrossRef]   [PubMed]  

27. D. Rueckert, L. I. Sonoda, C. Hayes, D. L. G. Hill, M. O. Leach, and D. J. Hawkes, “Nonrigid Registration Using Free-Form Deformations: Application to Breast MR Images,” IEEE Trans. Med. Imaging 18(8), 712–721 (1999). [CrossRef]   [PubMed]  

28. S. Lee, G. Wolberg, and S. Y. Shin, “Scattered data interpolation with multilevel B-splines,” IEEE Trans. Vis. Comput. Graph. 3(3), 228–244 (1997). [CrossRef]  

29. J. V. Hajnal, N. Saeed, E. J. Soar, A. Oatridge, I. R. Young, and G. M. Bydder, “A Registration and Interpolation Procedure for Subvoxel Matching of Serially Acquired MR Images,” J. Comput. Assist. Tomogr. 19(2), 289–296 (1995). [CrossRef]   [PubMed]  

30. S. Gabarda and G. Cristóbal, “Blind image quality assessment through anisotropy,” J. Opt. Soc. Am. A 24(12), B42–B51 (2007). [CrossRef]   [PubMed]  
