
Synthetic imaging through wavy water surface with centroid evolution

Open Access

Abstract

Imaging through a wavy water surface is a challenging task, as the wavy surface introduces anisoplanatism effects that are difficult to model and track. A typical recovery method usually involves multi-stage processing of a pre-acquired image sequence. Here a new progressive restoration scheme is demonstrated; it can run simultaneously with image acquisition and mitigates both distortion and blur progressively. The method extends the anisotropic evolution used in lucky region fusion with a novel progressive optical-flow-based de-warping scheme, centroid evolution. Compared with other state-of-the-art techniques, the proposed method produces comparable results even with far fewer acquired frames. Experiments with real through-water scenes also demonstrate the effectiveness of the method.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Imaging through a wavy water surface is required in military applications such as the virtual periscope, where submerged cameras are used to image objects above the water [1–6]. Unlike other underwater imaging systems, the difficulties in such imaging scenarios mainly come from the water-air interface. This irregular surface introduces anisoplanatism effects into the image, creating scene distortion, defocus, double images, black holes caused by total internal reflection when imaging an airborne scene from below the water [4], and illumination caustics when imaging an underwater scene from above the water [7]. Compared with long-range imaging through the turbulent atmosphere, the distortions in these scenarios are much stronger and are mixed with more complicated aberrations.

One way to approach the problem is by modelling the water surface. Milder et al. [5] propose a method to estimate the water surface by analyzing the sky brightness; in their imaging system an upward-looking submerged camera captures the panoramic above-surface scene. The incident light from above is gradually extinguished as the surface normal deviates from the viewing line of sight. Assuming that it is completely dark underwater and that the sky is cloudless and uniform, the sky brightness determines the radial slope of the surface; a harmonic wave model is then used to estimate the water surface, and inverse ray tracing reconstructs the undistorted image. Alterman et al. [6] add an additional imaging sensor, STELLA MARIS (STELLAr MArine Refractive Imaging Sensor), to measure the wavy water surface in real time. It works like an adaptive optics system but uses the sun as a guide star. The wavefront sensor consists of a pinhole array imaging the sun onto a diffuser plane right behind it and a camera capturing the distribution of sun images on the diffuser plane. These pinholes have an extremely narrow field of view, so the patch of water surface along each line of sight is small enough to be assumed isoplanatic. Sampled normal vectors of the water surface are then deduced from the position of each sun image behind the pinholes and used to estimate the water surface. A method using polarimetric imaging to recover the water surface slope is proposed in [8,9], although no recovered image results have been presented yet.

Another typical approach utilizes an image sequence of the same stationary scene to reconstruct the undistorted image. In early research, a method that divides each frame into overlapping sub-patches was widely used [10–13], followed by lucky patch selection in time [10–12] or fusion in the bispectral domain [13]. The limitation of these methods is the assumption that corresponding patches at different times are spatially invariant, so they cannot handle strong, high-amplitude waves. Tian et al. [14] propose a distortion simulator based on a compact distortion model, in which the warping coefficients are optimized through a special tracking scheme; distorted frames can then be de-warped using the distortion model. Oreifej et al. [15] propose a two-stage reconstruction approach. First a robust non-rigid registration approach is employed, in which a pre-blurred frame sequence is iteratively updated and registered to its average frame. Rank minimization is then applied to the registered frames to remove the remaining sparse noise. Another distortion mitigation approach, FRTAAS (First Register Then Average And Subtract), is described in [16–19]. In this scheme one typical frame is selected as the registration reference to estimate shift maps from the reference frame to the remaining frames; the shift maps are then averaged and used to warp the reference frame to the centroid of the video frames. This method is faster and more robust than registering all frames in the sequence to their average frame. More recently, Li et al. [20] demonstrate a trained convolutional neural network that removes dynamic refractive distortion from a single image.

Tahtali et al. [21,22] have proposed several concepts for progressive restoration of nonuniformly distorted image sequences. FRTAAS2 is proposed in [21], replacing the frame registration scheme of FRTAAS with a frame-by-frame shift-map accumulation scheme. This improved method removes the need for a reference frame for registration, and thus may potentially be better suited for real-time implementation. The restoration of the distorted image sequence still depends on a reference frame, which has to be the first frame in the sequence, whereas in FRTAAS a "lucky frame" is usually chosen as the reference. Frame distortion prediction using a Kalman filter is demonstrated in [22], and it inspired several later studies on distortion prediction [23–25]. Predicting distortion ahead of time would be ideal for a progressive restoration method, but we believe these methods are still not robust enough to be applicable in a through-water-surface imaging scenario.

In this paper, a simple but efficient progressive restoration approach is proposed. It is rather different from the progressive restoration concepts described above and is instead based on the synthetic imaging technique "lucky region fusion" [26–30], which was first developed to mitigate anisoplanatic degradation from the turbulent atmosphere. We bring this approach to the through-water-surface imaging scenario, where the distortion is inherently stronger and many other attempts to adapt "lucky imaging" have not made progress so far. A novel progressive de-warping scheme based on optical flow, centroid evolution, is integrated into the anisotropic gain evolution. Compared with the image-sequence-based methods above, our method also reduces blur, which usually requires an additional stage in other methods, and it can run simultaneously with image acquisition. It can therefore potentially be implemented in real-time applications. The method is tested and analyzed with data from [14] and with water-tank experiments in the water-to-air imaging scenario [31].

2. Problem setup

Consider imaging a still scene through a wavy water surface; a consecutive image sequence $\{I^{(n)}(\mathbf{r})\}$ is acquired, where $\mathbf{r}=(x,y)$ denotes the coordinate vector on the image. Our goal is to restore the stationary image $I_S(\mathbf{r})$, captured at an ideal moment when the water surface is perfectly still. We formulate a randomly distorted frame as

I^{(n)}(\mathbf{r}) = (H * I_S)\bigl(\mathbf{r} + \mathbf{W}_n(\mathbf{r})\bigr) + \mathrm{noise}, \qquad (1)
where $\mathbf{W}_n(\cdot)$ is a warping field representing the random distortion in the n-th frame: at the pixel level it shifts points from one image to another; at the frame level it geometrically warps one image into another. In fact, the wavy water surface may produce multiple images of one object through different paths, occlude objects, or even create holes due to total internal reflection. Our simple model does not take these into account; however, our tests show that lucky region fusion has some ability to counter these effects. $H$ is an anisotropic blurring kernel representing motion blur and random anisoplanatic aberrations caused by the irregular water surface. Additionally, we assume that the warping field averaged over time approaches the zero vector as the averaging period becomes long enough.
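As a concrete illustration of Eq. (1), and not part of the original pipeline, the Python sketch below synthesizes one distorted frame from a stationary image: a smooth random field stands in for $\mathbf{W}_n(\mathbf{r})$, an isotropic Gaussian blur stands in for the anisotropic kernel $H$, and all parameter values are illustrative assumptions rather than measured surface statistics.

import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

rng = np.random.default_rng(0)

def smooth_field(shape, amplitude, scale):
    # Low-pass filtered white noise, rescaled so its peak magnitude is `amplitude` pixels.
    f = gaussian_filter(rng.standard_normal(shape), scale)
    return amplitude * f / (np.abs(f).max() + 1e-12)

def simulate_frame(I_s, warp_amp=4.0, warp_scale=15.0, blur_sigma=1.5, noise_std=0.01):
    # One frame of Eq. (1): I(r) = (H * I_S)(r + W_n(r)) + noise.
    # warp_amp, warp_scale, blur_sigma and noise_std are illustrative values only.
    h, w = I_s.shape
    wx = smooth_field((h, w), warp_amp, warp_scale)   # W_n(r), x component
    wy = smooth_field((h, w), warp_amp, warp_scale)   # W_n(r), y component
    blurred = gaussian_filter(I_s, blur_sigma)        # (H * I_S), isotropic stand-in for H
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    warped = map_coordinates(blurred, [yy + wy, xx + wx], order=1, mode='reflect')
    return warped + rng.normal(0.0, noise_std, (h, w))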

3. Proposed method

The objective of this method is to generate a fusion image sequence $\{I_F^{(n)}(\mathbf{r})\}$ simultaneously with the input sequence $\{I^{(n)}(\mathbf{r})\}$ and to progressively approach the stable image $I_S$ in Eq. (1). The first fusion frame is taken directly from the first input frame; the recursion then proceeds as illustrated in Fig. 1. During each recursion step a new frame $I(\mathbf{r})$ is captured, the current fusion frame $I_F(\mathbf{r})$ and the new input frame are warped accordingly through centroid evolution, and the next fusion frame is then synthesized through image-quality-assessment-driven anisotropic evolution. Details are described in the following subsections.

Fig. 1 Recursive processing workflow.

3.1 Image-quality-assessment-driven anisotropic evolution

The central idea of lucky region fusion is not to estimate $H$ or $\mathbf{W}$ in Eq. (1) directly, but to employ an anisotropic evolution equation driven by image quality assessment to gradually mitigate them. The image quality map $M(\mathbf{r})$ is introduced in [26,27] to analyze the local image quality of a frame:

M(\mathbf{r}) = \int J(\mathbf{r}')\, G(\mathbf{r} - \mathbf{r}')\, d^2 r', \qquad (2)
where the criterion $J(\mathbf{r})$ is gradient based [30], $J(\mathbf{r}) = |\nabla I(\mathbf{r},t)| \,/\, \int I(\mathbf{r},t)\, d^2 r$, and the convolution kernel $G$ is a Gaussian kernel with a predefined kernel size $\sigma$:

G_\sigma(\mathbf{r}) = \exp\!\left\{ -\frac{|\mathbf{r}|^2}{2\sigma^2} \right\}. \qquad (3)
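A minimal Python sketch of the quality map of Eqs. (2) and (3), assuming float grayscale frames; the Gaussian smoothing plays the role of the convolution with $G_\sigma$, and the kernel size is left as a free parameter as in the paper.

import numpy as np
from scipy.ndimage import gaussian_filter

def quality_map(I, sigma):
    # Image quality map M(r) of Eq. (2): the sharpness criterion J(r) of [30],
    # |grad I(r)| divided by the total image intensity, smoothed by the Gaussian
    # kernel G_sigma of Eq. (3).
    gy, gx = np.gradient(I.astype(float))
    J = np.hypot(gx, gy) / (I.sum() + 1e-12)
    return gaussian_filter(J, sigma)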

The anisotropic evolution equation is proposed in [27] as below:

\frac{\partial I_F(\mathbf{r},t)}{\partial t} = -K\,\delta(\mathbf{r},t)\,\bigl(I_F(\mathbf{r},t) - I(\mathbf{r},t)\bigr), \qquad (4)
where $I_F(\mathbf{r},t)$ is the fusion image and $K$ is a scalar coefficient that controls the global strength of the anisotropic gain; here we simply use $K = 1/\max_{(\mathbf{r},t)} \delta(\mathbf{r},t)$ to normalize the anisotropic gain over the whole image and over time [30]. $\delta(\mathbf{r},t)$ represents the anisotropic gain, a weight mask selecting regions with higher quality in the upcoming frame, given as

\delta(\mathbf{r},t) = \begin{cases} M(\mathbf{r},t) - M_F(\mathbf{r},t), & M(\mathbf{r},t) > M_F(\mathbf{r},t) \\ 0, & \text{otherwise.} \end{cases} \qquad (5)

Here $M(\mathbf{r},t)$ and $M_F(\mathbf{r},t)$ are the image quality maps of $I(\mathbf{r},t)$ and $I_F(\mathbf{r},t)$. Now consider the discrete version of Eq. (4) applied to the image sequence $\{I^{(n)}(\mathbf{r})\}$:

\begin{cases} I_F^{(0)}(\mathbf{r}) = I^{(0)}(\mathbf{r}) \\ I_F^{(n)}(\mathbf{r}) = (1 - K\delta_n)\, I_F^{(n-1)}(\mathbf{r}) + K\delta_n\, I^{(n)}(\mathbf{r}), \quad n = 1, 2, 3, \ldots \end{cases} \qquad (6)

A new image sequence $\{I_F^{(n)}(\mathbf{r})\}$ is created by selecting regions with higher image quality from $\{I^{(n)}(\mathbf{r})\}$ and overlaying them onto the former fusion frame.
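A sketch of one discrete fusion step of Eqs. (5) and (6), reusing the quality_map sketch above. Normalizing $K$ by the per-step maximum of $\delta_n$ is a simplification of the global maximum over the whole image and time used in [30], and the default kernel size is an illustrative value.

import numpy as np

def fuse_step(I_fused, I_new, sigma=8.0):
    # One discrete anisotropic-evolution step, Eq. (6), using quality_map above.
    M_f = quality_map(I_fused, sigma)
    M_n = quality_map(I_new, sigma)
    delta = np.maximum(M_n - M_f, 0.0)       # anisotropic gain delta_n, Eq. (5)
    K = 1.0 / (delta.max() + 1e-12)          # simplified per-step normalization of K
    return (1.0 - K * delta) * I_fused + K * delta * I_new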

3.2 Centroid evolution

There’s an implicit assumption behind Eq. (6) that the two imagesIF(n1)andI(n)has to be stably registered, however in a through-water-surface imaging scenario, such conditions are hardly met. So the recursive equation is re-written here with warping fieldUn,Vnapplied ontoIF(n1)andI(n)respectively, registering these two frames to a virtual centroid frame:

\begin{cases} I_F^{(0)}(\mathbf{r}) = I^{(0)}(\mathbf{r}) \\ I_F^{(n)}(\mathbf{r}) = (1 - K\delta_n)\, I_F^{(n-1)}\bigl(\mathbf{r} + \mathbf{U}_n(\mathbf{r})\bigr) + K\delta_n\, I^{(n)}\bigl(\mathbf{r} + \mathbf{V}_n(\mathbf{r})\bigr), \quad n = 1, 2, 3, \ldots \end{cases} \qquad (7)

Now we need to construct the warping field sequences $\{\mathbf{U}_n(\mathbf{r})\}$ and $\{\mathbf{V}_n(\mathbf{r})\}$ so that every point on $I_F^{(n-1)}$ and $I^{(n)}$ is shifted to the centroid of its point set across time.

Given an arbitrary point $P_n$ on the imagery $\{I^{(n)}\}$ that shifts from frame to frame, it has a stationary position $P_S$. Let $C_n$ be the centroid of the point set $P_n$ across the image sequence; according to the assumption in Section 2, $C_n$ approaches $P_S$ as $n$ becomes larger. Our idea is to track this point $C_n$ and let it lead us to the stationary point $P_S$. From the starting points in Fig. 2, we can easily deduce from the centroid formula:

Fig. 2 Centroid point $C_n$ being shifted by the newly added point $P_n$.

\overrightarrow{C_0 C_1} = \tfrac{1}{2}\,\overrightarrow{C_0 P_1}, \qquad \overrightarrow{C_1 C_2} = \tfrac{1}{3}\,\overrightarrow{C_1 P_2}, \;\ldots \qquad (8)

A pattern emerges: each newly added point pulls the centroid toward it with a gradually reduced strength. The generalized formula is

\overrightarrow{C_{n-1} C_n} = \frac{1}{n+1}\,\overrightarrow{C_{n-1} P_n}. \qquad (9)
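For completeness, Eq. (9) follows directly from the centroid definition $C_n = \frac{1}{n+1}\sum_{k=0}^{n} P_k$:

\overrightarrow{C_{n-1}C_n} = C_n - C_{n-1}
  = \frac{1}{n+1}\sum_{k=0}^{n} P_k - \frac{1}{n}\sum_{k=0}^{n-1} P_k
  = \frac{1}{n+1}\left(P_n - \frac{1}{n}\sum_{k=0}^{n-1} P_k\right)
  = \frac{1}{n+1}\,\overrightarrow{C_{n-1}P_n}.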

There’re many ways to prove the Eq. (9), one of the straightforward is substituting the vectors with their coordinates. Since the point P is an arbitrary point on the image, Eq. (9) applies to all pixels on the image, thus can be generalized to the warping fields. So we first estimate the warping fieldωn(r)fromIF(n1)toI(n)using a coarse-to-fine optical flow algorithm implemented in [32,33], then use a simple fixed-point approach [34] to estimate the backward flowωn*(r)from forward flowωn(r). The warping fieldUn(r)andVn(r)are thus given as follows:

\mathbf{U}_n(\mathbf{r}) = \frac{1}{n+1}\,\boldsymbol{\omega}_n(\mathbf{r}), \qquad \mathbf{V}_n(\mathbf{r}) = \frac{n}{n+1}\,\boldsymbol{\omega}_n^*(\mathbf{r}). \qquad (10)
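The sketch below strings these pieces together into one recursion step of Eq. (7), reusing quality_map and fuse_step from the Section 3.1 sketches. It is an illustration under stated assumptions, not the authors' MATLAB implementation: OpenCV's Farneback flow stands in for the coarse-to-fine flow of [32,33], invert_flow is a minimal fixed-point inversion in the spirit of [34], frames are float grayscale images in [0, 1], and n is the index of the newly acquired frame.

import numpy as np
import cv2
from scipy.ndimage import map_coordinates

def warp(I, flow):
    # Sample image I at r + flow(r); flow is an (H, W, 2) field in (dx, dy) order.
    h, w = I.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    return map_coordinates(I, [yy + flow[..., 1], xx + flow[..., 0]],
                           order=1, mode='nearest')

def invert_flow(flow, n_iter=10):
    # Fixed-point inversion in the spirit of [34]: solve w*(r) = -w(r + w*(r))
    # by iterating from w* = 0; the iteration count is an illustrative choice.
    inv = np.zeros_like(flow)
    for _ in range(n_iter):
        inv = -np.stack([warp(flow[..., 0], inv), warp(flow[..., 1], inv)], axis=-1)
    return inv

def centroid_evolution_step(I_fused, I_new, n, sigma=8.0):
    # One recursion step of Eq. (7) with the warping fields of Eq. (10).
    f8 = (np.clip(I_fused, 0.0, 1.0) * 255).astype(np.uint8)
    n8 = (np.clip(I_new, 0.0, 1.0) * 255).astype(np.uint8)
    w_fwd = cv2.calcOpticalFlowFarneback(f8, n8, None, 0.5, 4, 21, 3, 5, 1.2, 0)
    w_bwd = invert_flow(w_fwd)                  # backward flow w_n*
    U_n = w_fwd / (n + 1.0)                     # Eq. (10)
    V_n = w_bwd * n / (n + 1.0)
    I_f_c = warp(I_fused, U_n)                  # pull both frames toward the centroid
    I_n_c = warp(I_new, V_n)
    return fuse_step(I_f_c, I_n_c, sigma)       # anisotropic fusion, Eq. (7)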

4. Experiments

We implemented the image recovery scheme described above in MATLAB and tested it with the image data provided in [14]. Since a thorough comparison of several state-of-the-art techniques on the same data sets has been made in [19], we list our recovered results and compare them using the SSIM (structural similarity index measure) [35] and PSNR (peak signal-to-noise ratio). Two typical real-scene sequences [31] with different kinds of degradation patterns are then tested; the recovered results are presented and analyzed.

4.1 Test with water-distorted imageries

Four image sets, "bricks", "tiny fonts", "small fonts" and "big fonts", each containing 61 frames, are used in the test. We use Gaussian kernels with different kernel sizes for these four imageries, and a box-shaped vignette is applied to $J(\mathbf{r})$ before the convolution in Eq. (2) to darken the pixels near the border and suppress artifacts caused by the border, as sketched below.
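A minimal sketch of such a box-shaped vignette, assuming a linear ramp of illustrative width; multiplying $J(\mathbf{r})$ by this mask before the Gaussian smoothing in quality_map suppresses the border response.

import numpy as np

def box_vignette(shape, border=20):
    # Box-shaped vignette: 1 in the interior, ramping linearly to 0 over `border`
    # pixels at each edge; it would multiply J(r) before the convolution of Eq. (2).
    h, w = shape
    ry = np.clip(np.minimum(np.arange(h), np.arange(h)[::-1]) / border, 0.0, 1.0)
    rx = np.clip(np.minimum(np.arange(w), np.arange(w)[::-1]) / border, 0.0, 1.0)
    return np.outer(ry, rx)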

The restored image results are shown in Fig. 3. Analyzing the fusion sequences in Fig. 3 frame by frame, we can see that distortion is mitigated rapidly: the fusion sequences become nearly stable within 30 input frames, while the anisotropic gain keeps finding sharper regions in new frames. We also find that registration artifacts pop up randomly and are soon suppressed by the anisotropic gain, although a few stubborn ones dominate the local anisotropic gain and reject upcoming good patches.

Fig. 3 Image restoration results with the data from [14]. The first column is the undistorted image; the second column is the first distorted frame; the 3rd-5th columns are fusion frames in the sequence $\{I_F^{(n)}(\mathbf{r})\}$ created by our recursive method (see Visualization 1).

Since [14] provides an undistorted reference frame for each data set, we can analyze the recovered image quality objectively. Two well-known image quality metrics, SSIM and PSNR, are used to analyze the quality of the fusion image sequence $\{I_F^{(n)}(\mathbf{r})\}$ frame by frame; results are shown in Figs. 4(a)-4(d). They show the evolution of the fusion image: SSIM and PSNR increase with minor oscillations.
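A sketch of this frame-by-frame evaluation using scikit-image's reference implementations of SSIM and PSNR, assuming float grayscale frames in [0, 1]:

import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def score_sequence(frames, reference):
    # Per-frame SSIM and PSNR of a sequence against the undistorted reference frame.
    ssim = [structural_similarity(reference, f, data_range=1.0) for f in frames]
    psnr = [peak_signal_noise_ratio(reference, f, data_range=1.0) for f in frames]
    return np.array(ssim), np.array(psnr)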

Fig. 4 SSIM and PSNR plots over time for the four image sets. The red line denotes the original image sequence $\{I^{(n)}(\mathbf{r})\}$; the blue line denotes the new fusion image sequence $\{I_F^{(n)}(\mathbf{r})\}$ generated by our method.

It should be noted that the methods proposed in [15,19] aim at restoring the whole undistorted image sequence, while our method only recovers a fusion image progressively. To put these results in the same chart, the recovered image sequences of [15,19] are fused into an average frame. The comparison is listed in Table 1. As can be seen, our method starts creating good results with only 30 input frames and then oscillates down slightly, but once all 60 frames have been processed, the SSIM and PSNR become higher than those of [15,19].

Table 1. SSIM and PSNR comparison with [15,19].

4.2 Test with real through-water scenes

The real-scene experiment is set up in a water-to-air imaging scenario. We put an upward-looking underwater camera in a water tank as shown in Fig. 5; the camera can rotate vertically and horizontally, providing a flexible and stable platform for viewing the surrounding buildings and road lamps. Two data sets are acquired with our submerged camera viewing the surrounding scenes in Fig. 6 (data are provided in [31]). The first image sequence is the scene in Fig. 6(a) distorted by a gravity wave; global oscillation is the major problem, and heavy motion blur dominates most areas of each frame. The second image sequence is the scene in Fig. 6(b) distorted by wind-generated capillary waves; small-scale local distortions vary rapidly and destroy details.

Fig. 5 Submerged camera setup in a water tank.

Fig. 6 The surrounding scenes viewed directly from the position where the water tank is located. (a) The building is about 70 m away; this scene is used in the first image sequence in our test. (b) The lamp is about 5 m away and the building in the background is about 40 m away; this scene is used in the second image sequence in our test.

When dealing with the real scenes, a problem arises: gradient-based image quality assessment does not penalize noise, while newly input frames always appear noisier than the fusion frame. Therefore, a non-local means denoising step [36] has been added right before the image quality assessment. The first 100 frames of the two data sets are used in our test; restored results are presented side by side with the distorted input frames in Fig. 7 and Fig. 8. The global oscillation in the first image sequence weakens in Fig. 7, and the strong motion blur is absent from our restored results. Distortion in the second image sequence is mitigated rapidly and is mostly eliminated within 10 frames in Fig. 8.
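A sketch of this added denoising step, using OpenCV's non-local means as a stand-in for the algorithm of [36]; the filter strength, and the choice to denoise only the copy of the frame that feeds the quality assessment, are assumptions of this sketch.

import numpy as np
import cv2

def denoise_for_assessment(I, strength=7.0):
    # Non-local means denoising (8-bit grayscale) applied before the quality-map
    # computation; `strength` is an illustrative value.
    I8 = (np.clip(I, 0.0, 1.0) * 255).astype(np.uint8)
    den = cv2.fastNlMeansDenoising(I8, None, strength, 7, 21)
    return den.astype(float) / 255.0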

Fig. 7 Left: the last frame in the original image sequence. Right: the last frame in the restored image sequence (see the side-by-side comparison of the original and restored image sequences in Visualization 2).

Fig. 8 The first 12 frames in the restored image sequence (see the side-by-side comparison of the original and restored image sequences in Visualization 3).

5. Discussion

A new progressive distortion mitigation method is proposed in this paper. Unlike other distortion mitigation methods, ours aims at processing simultaneously with image acquisition. A novel distortion mitigation scheme, centroid evolution, is proposed. Similar to [15] and [19], we are all trying to warp the distorted frames to a centroid frame that never existed. Reference [15] suggests registering pre-blurred frames to the average frame and then using the estimated warping vector fields to warp the distorted frames; since the blurred frames do not yield accurate estimates, this has to be done iteratively. Reference [19] suggests selecting a particular frame as the reference, registering the remaining frames to it, and then averaging and inverting the estimated warping vector fields to warp the reference frame to the centroid; this successfully avoids using the blurry average frame. Our method does not attempt to resolve the whole problem with a mass of frames at once, but starts by warping two frames to their centroid and then tracks where the centroid goes as new frames are added to the image set one by one. This distortion mitigation scheme coincides with another widely used turbulence mitigation method, lucky region fusion, and the two cooperate well in our tests. Lucky region fusion is good at tackling atmospheric turbulence where distortion is not strong; centroid evolution compensates the displacement between the two frames before the fusion while progressively pushing every point on the fusion frame toward the centroid.

We have demonstrated successful recovery results with both water-distorted imageries and real through-water scenes. However, we also noticed several drawbacks during our tests: sometimes registration errors create very sharp edges that pop out and dominate local areas, rejecting upcoming good patches; and the gradient-based image quality metric does not penalize noise, so lucky regions in the fusion frame may occasionally be overlaid with blurry but noisy upcoming frames. We are investigating several improvements that may be added to our method, such as refining the anisotropic gain with a structure-tensor-oriented kernel [37] and implementing a more robust image quality assessment that penalizes noise.

Acknowledgments

The authors thank Su Li, Zhenhua Lyu, Shaojie Zhou, Anjie Wang, and Liping Xiao for their help with the experiment.

References

1. R. E. Potter, "Observations from below a rough water surface to determine conditions of or above the surface waves," (August 22, 1996).
2. M. Alterman, Y. Y. Schechner, P. Perona, and J. Shamir, "Detecting motion through dynamic refraction," IEEE Trans. Pattern Anal. Mach. Intell. 35(1), 245-251 (2013).
3. M. Alterman, Y. Y. Schechner, and Y. Swirski, "Triangulation in random refractive distortions," in IEEE International Conference on Computational Photography (ICCP) (2013), pp. 1-10.
4. H. Suiter, N. Flacco, P. Carter, K. Tong, R. Ries, and M. Gershenson, "Optics near the Snell angle in a water-to-air change of medium," in Oceans 2007 (IEEE, 2007), pp. 1-12.
5. D. M. Milder, P. W. Carter, N. L. Flacco, B. E. Hubbard, N. M. Jones, K. R. Panici, B. D. Platt, R. E. Potter, K. W. Tong, and D. J. Twisselmann, "Reconstruction of through-surface underwater imagery," Waves Random Complex Media 16(4), 521-530 (2006).
6. M. Alterman, Y. Swirski, and Y. Y. Schechner, "STELLA MARIS: stellar marine refractive imaging sensor," in 2014 IEEE International Conference on Computational Photography (ICCP) (IEEE, 2014), pp. 1-10.
7. Y. Y. Schechner and N. Karpel, "Attenuating natural flicker patterns," in Oceans '04 MTS/IEEE Techno-Ocean '04 (IEEE Cat. No.04CH37600) (2004), Vol. 3, pp. 1262-1268.
8. H. Schultz and A. Corrada-Emmanuel, "System and method for imaging through an irregular water surface," (December 2009).
9. P. H. Barsic and C. R. Chinn, "Sea surface slope recovery through passive polarimetric imaging," in 2012 Oceans (IEEE, 2012), pp. 1-9.
10. A. Efros, V. Isler, J. Shi, and M. Visontai, "Seeing through water," in Advances in Neural Information Processing Systems (2005), pp. 393-400.
11. A. Donate, G. Dahme, and E. Ribeiro, "Classification of textures distorted by water waves," in 18th International Conference on Pattern Recognition (ICPR'06) (2006), Vol. 2, pp. 421-424.
12. A. Donate and E. Ribeiro, "Improved reconstruction of images distorted by water waves," in Advances in Computer Graphics and Computer Vision, Communications in Computer and Information Science (Springer Berlin Heidelberg, 2007), pp. 264-277.
13. Z. Wen, A. Lambert, D. Fraser, and H. Li, "Bispectral analysis and recovery of images distorted by a moving water surface," Appl. Opt. 49(33), 6376-6384 (2010).
14. Y. Tian and S. G. Narasimhan, "Seeing through water: image restoration using model-based tracking," in 2009 IEEE 12th International Conference on Computer Vision (IEEE, 2009), pp. 2303-2310.
15. O. Oreifej, G. Shu, T. Pace, and M. Shah, "A two-stage reconstruction approach for seeing through water," in CVPR 2011 (IEEE, 2011), pp. 1153-1160.
16. K. K. Halder, M. Tahtali, and S. G. Anavatti, "Simple and efficient approach for restoration of non-uniformly warped images," Appl. Opt. 53(25), 5576-5584 (2014).
17. M. Tahtali, D. Fraser, and A. Lambert, "Restoration of non-uniformly warped images using a typical frame as prototype," in TENCON 2005 - 2005 IEEE Region 10 Conference (IEEE, 2005), pp. 1-6.
18. K. K. Halder, M. Tahtali, and S. G. Anavatti, "Geometric correction of atmospheric turbulence-degraded video containing moving objects," Opt. Express 23(4), 5091-5101 (2015).
19. K. K. Halder, M. Paul, M. Tahtali, S. G. Anavatti, and M. Murshed, "Correction of geometrically distorted underwater images using shift map analysis," J. Opt. Soc. Am. A 34(4), 666-673 (2017).
20. Z. Li, Z. Murez, D. Kriegman, R. Ramamoorthi, and M. Chandraker, "Learning to see through turbulent water," in 2018 IEEE Winter Conference on Applications of Computer Vision (WACV) (2018), pp. 512-520.
21. M. Tahtali, A. J. Lambert, and D. Fraser, "Restoration of nonuniformly warped images using accurate frame by frame shiftmap accumulation," Proc. SPIE 6316, 631603 (2006).
22. M. Tahtali, A. J. Lambert, and D. Fraser, "Progressive restoration of nonuniformly warped images by shiftmap prediction using Kalman filter," in Adaptive Optics: Analysis and Methods/Computational Optical Sensing and Imaging/Information Photonics/Signal Recovery and Synthesis Topical Meetings on CD-ROM (Optical Society of America, 2007), p. SMC5.
23. M. Tahtali, A. Lambert, and D. Fraser, "Self-tuning Kalman filter estimation of atmospheric warp," Proc. SPIE 7076, 70760F (2008).
24. M. Tahtali and A. Lambert, "Statistical turbulence approach to the covariance matrices in the shiftmap prediction using Kalman filter," in Frontiers in Optics 2009/Laser Science XXV/Fall 2009 OSA Optics & Photonics Technical Digest (Optical Society of America, 2009), p. STuC2.
25. K. K. Halder, M. Tahtali, and S. G. Anavatti, "Model-free prediction of atmospheric warp based on artificial neural network," Appl. Opt. 53(30), 7087-7094 (2014).
26. G. W. Carhart and M. A. Vorontsov, "Synthetic imaging: nonadaptive anisoplanatic image correction in atmospheric turbulence," Opt. Lett. 23(10), 745-747 (1998).
27. M. A. Vorontsov, "Parallel image processing based on an evolution equation with anisotropic gain: integrated optoelectronic architectures," J. Opt. Soc. Am. A 16(7), 1623-1637 (1999).
28. M. A. Vorontsov and G. W. Carhart, "Anisoplanatic imaging through turbulent media: image recovery by local information fusion from a set of short-exposure images," J. Opt. Soc. Am. A 18(6), 1312-1324 (2001).
29. M. Aubailly, M. A. Vorontsov, G. W. Carhart, and M. T. Valley, "Video enhancement through automated lucky-region fusion from a stream of atmospherically-distorted images," in Computational Optical Sensing and Imaging (Optical Society of America, 2009), p. CThC3.
30. M. Aubailly, M. Vorontsov, G. W. Carhart, and M. T. Valley, "Automated video enhancement from a stream of atmospherically-distorted images: the lucky-region fusion approach," Proc. SPIE 7463, 74630C (2009).
31. R. Zhang, "Imaging through wavy water surface," (2018). https://doi.org/10.6084/m9.figshare.6843875.v1.
32. A. Bruhn, J. Weickert, and C. Schnörr, "Lucas/Kanade meets Horn/Schunck: combining local and global optic flow methods," Int. J. Comput. Vis. 61(3), 211-231 (2005).
33. C. Liu, "Beyond pixels: exploring new representations and applications for motion analysis," Massachusetts Institute of Technology (2009).
34. M. Chen, W. Lu, Q. Chen, K. J. Ruchala, and G. H. Olivera, "A simple fixed-point approach to invert a deformation field," Med. Phys. 35(1), 81-88 (2007).
35. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Trans. Image Process. 13(4), 600-612 (2004).
36. A. Buades, B. Coll, and J. Morel, "A non-local algorithm for image denoising," in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05) (2005), Vol. 2, pp. 60-65.
37. A. V. Kanaev, W. Hou, S. R. Restaino, S. Matt, and S. Gładysz, "Restoration of images degraded by underwater turbulence using structure tensor oriented image quality (STOIQ) metric," Opt. Express 23(13), 17077-17090 (2015).

Supplementary Material (3)

Visualization 1: Image restoration results with water-distorted imageries.
Visualization 2: Side-by-side comparison of our image restoration results and the original distorted image sequence.
Visualization 3: Side-by-side comparison of our image restoration results and the original distorted image sequence.
