Image degradation and recovery based on multiple scattering in remote sensing and bad weather condition

Abstract

The radiance received by the sensor is influenced by atmospheric interactions, including the effects of absorption and scattering. Based on an analysis of the radiance along the transmission path, we propose an image degradation model and a recovery method for remote sensing and bad weather conditions, in which the effect of multiple scattering cannot be ignored. Several real outdoor images are restored to verify the effectiveness of the proposed model and method. The restored results are significantly improved in contrast and sharpness.

©2012 Optical Society of America

1. Introduction

The quality of images captured by a sensor over long distances can be dramatically degraded by the interaction of atmospheric particles, especially in poor weather conditions. For example, Table 1 presents some contrast data influenced by atmospheric transmission. In the calculation, two groups of objects on the ground are chosen. The highest and lowest reflectances of the objects in group 1 are 0.9 and 0.09, respectively, while in group 2 they are 0.9 and 0.009. The radiance data are all computed by the PcModwin (version 3.7) software [1] under the condition that the observation height is 18 km and the side-glance transmission distances are 50 km, 70 km and 100 km, respectively. The contrast is calculated by Eq. (1):

$$\mathrm{Contrast} = \frac{(E_a + E_0) - (E_b + E_0)}{(E_a + E_0) + (E_b + E_0)}, \qquad (1)$$
where E_a + E_0 denotes the radiance received by the sensor from the object with the highest reflectance, and E_b + E_0 is that from the object with the lowest reflectance in the same group. E_a and E_b are the radiances leaving the objects with the highest and lowest reflectance, respectively, and E_0 is the path radiance. As shown in the table, the contrast declines severely for distant objects due to atmospheric transmission. Therefore, to enhance the image quality, image recovery is necessary.
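
As a concrete illustration of Eq. (1), the following minimal Python sketch evaluates the contrast for hypothetical radiance values (arbitrary units; these placeholders are not the PcModwin data of Table 1). It shows how a large path radiance E_0 collapses the contrast even when the reflectances differ by a factor of ten.

```python
def contrast(ea_plus_e0: float, eb_plus_e0: float) -> float:
    """Contrast of Eq. (1) between the brightest and darkest objects."""
    return (ea_plus_e0 - eb_plus_e0) / (ea_plus_e0 + eb_plus_e0)

# Short path: small path radiance E0, high contrast.
print(contrast(1.0 + 0.1, 0.1 + 0.1))   # ~0.69
# Long path: E0 dominates both terms and the contrast collapses.
print(contrast(0.3 + 2.0, 0.03 + 2.0))  # ~0.06
```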

Table 1. Contrast data influenced by atmospheric transmission

With the development of computer vision, more and more image recovery algorithms have been proposed to correct the influence of atmospheric transmission. Most of them are based on the widely used model expressed by Eq. (2) [2–11]:

$$I(i,j) = J(i,j) \circ t(i,j) + A \cdot [1 - t(i,j)], \qquad (2)$$
where the first term on the right side, J(i,j)∘t(i,j), is the directly attenuated radiance from the target object, which represents the surface-leaving radiance. The second term A·[1−t(i,j)] describes the path radiance, including the effects of molecular scattering, aerosol scattering, and the interaction between molecular and aerosol scattering. I(i,j) is the radiance received by the sensor, which is the only known term in Eq. (2). J(i,j), t(i,j) and A denote the target object radiance, the medium transmittance at the pixel location (i,j) and the sky radiance, respectively. The operator ∘ denotes component-wise multiplication.
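
For illustration, the sketch below simulates the degradation of Eq. (2) on synthetic inputs; the image size, the transmittance map and the per-channel airlight A are assumptions made for the example, not values from the paper.

```python
import numpy as np

def degrade_eq2(J: np.ndarray, t: np.ndarray, A: np.ndarray) -> np.ndarray:
    """I = J o t + A . (1 - t), applied per pixel and per color channel."""
    return J * t[..., None] + A * (1.0 - t[..., None])

rng = np.random.default_rng(0)
J = rng.random((64, 64, 3))               # haze-free scene radiance
t = np.exp(-0.8 * rng.random((64, 64)))   # per-pixel transmittance
A = np.array([0.90, 0.90, 0.95])          # sky radiance per channel (assumed)
I = degrade_eq2(J, t, A)                  # the degraded observation
```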

In order to solve the problem, additional information [2–6] is required, such as the depth of the scene [2,4] or image sequences [5,6], which is not practical in remote sensing applications. Therefore, recent research [7–10] has focused on developing algorithms that use a single image. Tan [7] optimizes a cost function in the framework of Markov random fields to improve the image quality, under the assumptions that the contrast of the refined image is higher than that of the input hazy image and that the atmospheric light is constant in each local area. However, the halo effect is serious at depth discontinuities and the output tends to be oversaturated. Fattal [9] estimates the transmittance assuming only that the medium transmittance and the surface shading are locally statistically uncorrelated. This approach produces good results except under heavy haze. He et al. [10] restore the input image according to the dark channel prior, a statistic of outdoor haze-free images. However, the prior is not accurate when the scene object is inherently similar to the atmospheric light.

The model shown in Eq. (2) is derived from the Bouguer-Lambert-Beer law [11], which assumes that the effect of atmospheric scattering can be neglected. However, when the target object is far away from the sensor, the influence of atmospheric scattering [12,13] becomes increasingly significant, so it cannot be ignored in remote sensing or extremely poor weather conditions.

In this study, we propose an image degradation model and a recovery method for remote sensing images that take atmospheric scattering into consideration. The performance of our method is compared with that of two other methods to verify its effectiveness.

2. The theory and approach

2.1 The proposed model

It is widely believed that the radiance received by the sensor is composed of two parts, as shown in Fig. 1(a). The solid line in Fig. 1(a) represents the radiance from the target object, and the dashed line denotes the sky radiance; both are attenuated through the atmospheric medium.

Fig. 1 (a) The radiance arriving at the sensor. (b) The corresponding relationship between image pixels and object pixels.

First of all, we analyze the radiance from the target object. Assuming the CCD sensor has r × c pixels, the detected target can be divided into r × c blocks correspondingly, as illustrated in Fig. 1(b). Thus every image pixel in the sensor is related to an object pixel at the same position (i,j). However, because of the atmospheric interaction along the transmission path, the radiance leaving an object pixel, denoted f_o(i,j), does not contribute entirely to the corresponding image pixel.

In the visible spectrum, the main atmospheric effects are absorption and scattering. They influence the transmission of the radiance simultaneously, but to simplify the problem they are treated as two independent processes, i.e., the influence resulting from atmospheric scattering is assumed to occur after the attenuation caused by atmospheric absorption.

The Bouguer-Lambert-Beer law states the relationship between the absorption and the properties of the medium through which the radiance is transmitted when scattering is ignored. Hence, the attenuation caused by atmospheric absorption can be modeled by the Bouguer-Lambert-Beer law, expressed as:

$$p_o(i,j) = f_o(i,j)\, t(i,j) = f_o(i,j) \exp[-\mu L(i,j)], \qquad (3)$$
where (i,j) represents the pixel position, p_o is the attenuated radiance, t is the transmittance, μ is the absorption coefficient of the atmospheric medium, which is assumed to be uniform, and L is the distance from the object to the sensor. For clarity, we call the process shown in Eq. (3) the decrease effect.
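
A short sketch of the decrease effect follows; the absorption coefficient and path-length map are hypothetical values chosen only to make Eq. (3) concrete.

```python
import numpy as np

mu = 0.05                                        # absorption coefficient, 1/km (assumed)
L = np.full((64, 64), 50.0)                      # object-to-sensor distance, km (assumed)
t = np.exp(-mu * L)                              # per-pixel transmittance
f_o = np.random.default_rng(1).random((64, 64))  # object radiance (synthetic)
p_o = f_o * t                                    # attenuated radiance, Eq. (3)
```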

Then the effect of atmospheric scattering, which includes single scattering and multiple scattering, is taken into consideration. Single scattering is usually treated as a random phenomenon, while multiple scattering is deterministic because its randomness is averaged out by the large number of scattering events. Based on this analysis, the influence of atmospheric scattering is mainly determined by the multiple scattering along the transmission path.

Narasimhan et al. [12] define the glow of a point source caused by the multiple scattering as the atmospheric point spread function. Then with the assumption that an extended light source of arbitrary shape and size is made up of several isotropic source elements, they develop a physics-based model for the multiple scattering of the extended light source as follows:

$$I = I_0 \ast \mathrm{APSF}, \qquad (4)$$
where I is the radiance coming out of the medium, I_0 is the radiance of the extended light source, APSF is the atmospheric point spread function, which is different for each pixel (i.e., space-variant), and ∗ stands for the convolution operator.

Therefore, during the atmospheric transmission, each object pixel can be regarded as a source element. Since the influence resulting from atmospheric scattering is assumed to occur after the attenuation caused by atmospheric absorption, it can be simulated by the convolution operation:

$$q_o = p_o \ast h_o, \qquad (5)$$
where q_o is the radiance received by the sensor and h_o denotes the atmospheric point spread function (it will be discussed later in subsection 2.2.2). We call the process shown in Eq. (5) the dispersion effect.

Due to the dispersion effect, the radiance received by the sensor at position (i,j) is influenced by the pixels inside a local region centered at (i,j) in the object plane. In particular, for the pixels at the boundaries of the sensor, the radiance is also affected by object pixels outside the field of view. As shown in Fig. 1(b), taking the red pixel on the top boundary of the image plane as an example, its radiance includes contributions from all the pixels inside the blue square of the object plane. Consequently, the captured image is derived from a larger object matrix through an aperture filter that handles the boundaries, which is expressed by

$$q_o\big|_{r \times c} = \left( p_o'\big|_{(r+k-1) \times (c+k-1)} \ast h_o\big|_{k \times k} \right) \circ n\big|_{r \times c} = \left[ \left( f_o'\big|_{(r+k-1) \times (c+k-1)} \circ t\big|_{(r+k-1) \times (c+k-1)} \right) \ast h_o\big|_{k \times k} \right] \circ n\big|_{r \times c}, \qquad (6)$$
where p_o' and f_o' are the (r+k−1) × (c+k−1) enlarged matrices of p_o and f_o, respectively. Here, t is redefined as the transmittance associated with f_o', and n is the rectangular window function with the same size r × c as the sensor. The width of the enlarged area on each side is (k−1)/2, as shown in Fig. 1(b), which is determined by the size of the dispersion kernel h_o. Omitting the size subscripts and primes, Eq. (6) can be rewritten as
$$q_o = [(f_o \circ t) \ast h_o] \circ n, \qquad (7)$$
which represents the whole process of decrease and dispersion effects.

The sky radiance can be regarded as coming from a uniform object with radiance f_a, so the attenuation of f_a is analyzed in the same way as that of the object radiance f_o. Because the portion of the radiance from this uniform object that reaches the sensor equals the portion of the radiance from the target object that is missed by the sensor, the transmittance associated with f_a is 1−t. Thus the path radiance received by the sensor, denoted q_a, is calculated by the following formula:

$$q_a = \{ [f_a \circ (1 - t)] \ast h_a \} \circ n, \qquad (8)$$
where h_a is the atmospheric point spread function associated with f_a.

Additionally, the CCD noise, denoted N_CCD, also affects the final captured image. Therefore, the total radiance g received by the sensor is the sum of the terms q_o, q_a and N_CCD, as shown below:

$$g = q_o + q_a + N_{\mathrm{CCD}} = [(f_o \circ t) \ast h_o] \circ n + \{ [f_a \circ (1 - t)] \ast h_a \} \circ n + N_{\mathrm{CCD}}. \qquad (9)$$

Equation (9) is our image degradation model. Comparing Eq. (9) with the widely used model in Eq. (2), we see that both contain two parts: the surface-leaving radiance and the path radiance. The decrease effect of the radiance along the transmission path is also taken into consideration in both models. However, we model the dispersion effect by a space-variant convolution for each pixel and a rectangular window filtering for the entire image, because the influence of atmospheric scattering cannot be ignored in remote sensing or extremely poor weather conditions. Moreover, Eq. (9) contains the CCD noise, which is unavoidable in imaging.
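
To make the forward model concrete, the sketch below simulates Eq. (9) under the simplifying assumption that h_o and h_a are space-invariant (in the paper the kernels vary per pixel with the optical thickness; see subsection 2.2.2). All inputs are synthetic placeholders.

```python
import numpy as np
from scipy.signal import convolve2d

r, c, k = 64, 64, 9
rng = np.random.default_rng(0)

f_o = rng.random((r + k - 1, c + k - 1))         # enlarged object plane f_o'
t = np.exp(-rng.random((r + k - 1, c + k - 1)))  # enlarged transmittance
f_a = 0.9                                        # sky radiance (scalar here)
h_o = np.full((k, k), 1.0 / k**2)                # placeholder dispersion kernels
h_a = h_o

# A 'valid' convolution of the enlarged plane leaves exactly the r x c
# sensitive area, which plays the role of the rectangular window n.
q_o = convolve2d(f_o * t, h_o, mode='valid')          # dispersed object term
q_a = convolve2d(f_a * (1.0 - t), h_a, mode='valid')  # dispersed path term
g = q_o + q_a + 0.01 * rng.standard_normal((r, c))    # Eq. (9) with CCD noise
```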

2.2 The image recovery

The goal of image recovery is to restore the target object radiance f_o by removing the atmospheric influence from the captured image g in Eq. (9). Figure 2 demonstrates the whole procedure of image recovery under the assumption that the parameters t, f_a, h_o and h_a are already estimated (the estimation of these parameters is discussed in subsections 2.2.1 and 2.2.2).

Fig. 2 The procedure of image recovery.

To deal with the boundary pixels in the image deconvolution of step 4, the image matrix g is first enlarged to g', i.e., step 1 in Fig. 2. The pixel values of the enlarged area are set equal to the nearest array border value. Figure 3 gives an example of step 1, in which Fig. 3(a) is the original image and Fig. 3(b) is its extended result obtained by repeating the pixel values on the borders. The pixels in the enlarged area are left untreated in steps 2–5, which is reasonable because these pixels are not included in the sensitive area of the sensor.
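
Step 1 amounts to replicate-padding. A one-line sketch using numpy's 'edge' mode, which we take to be equivalent to repeating the nearest border value:

```python
import numpy as np

k = 9
pad = (k - 1) // 2
g = np.random.default_rng(2).random((64, 64))  # placeholder captured image
g_prime = np.pad(g, pad, mode='edge')          # (r+k-1) x (c+k-1) enlarged g'
```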

Fig. 3 (a) The original image. (b) The enlarged image.

After step 1, Eq. (9) is rewritten as follows:

$$g' = (f_o' \circ t) \ast h_o + [f_a \circ (1 - t)] \ast h_a + N_{\mathrm{CCD}}, \qquad (10)$$
and f_o is then restored from Eq. (10).

Because the kernel h_o is space-variant over g', the object radiance has to be restored pixel by pixel. Suppose that we want to recover the object radiance at position (i,j) inside the sensitive area of the sensor, represented by the red pixel in g' in Fig. 2. Due to the dispersion effect discussed in subsection 2.1, we extract a local region S centered at position (i,j) (step 2). S is of the same size as the atmospheric point spread function at position (i,j).

Then we compute the term G with Eq. (11) (i.e., step 3):

$$G = g' - [f_a \circ (1 - t)] \ast h_a, \qquad (11)$$
so
$$f_o \circ t = \mathrm{deconv}(G, h_o), \qquad (12)$$
where the function deconv(·,·) stands for the regularized space-variant deconvolution of the two operands. We use Wiener filtering [14] in our experiments, i.e., step 4.
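
The experiments use Matlab's Wiener deconvolution; the frequency-domain sketch below is one equivalent formulation, in which nsr plays the role of the noise-to-signal power ratio (set to 0.02 in Section 3). The zero-padding helper and the assumption of an odd-sized, centered kernel are ours.

```python
import numpy as np

def pad_to(h: np.ndarray, shape) -> np.ndarray:
    """Embed a small odd-sized kernel at the center of a zero array."""
    out = np.zeros(shape)
    r0 = (shape[0] - h.shape[0]) // 2
    c0 = (shape[1] - h.shape[1]) // 2
    out[r0:r0 + h.shape[0], c0:c0 + h.shape[1]] = h
    return out

def wiener_deconv(G: np.ndarray, h: np.ndarray, nsr: float = 0.02) -> np.ndarray:
    """Regularized (Wiener) deconvolution of G by the kernel h, Eq. (12)."""
    H = np.fft.fft2(np.fft.ifftshift(pad_to(h, G.shape)))
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)  # Wiener filter, constant NSR
    return np.real(np.fft.ifft2(W * np.fft.fft2(G)))
```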

Comparing Eq. (12) with Eq. (10), it may seem confusing that the noise term N_CCD disappears. This is because the image deconvolution procedure does not require this term, which is a random variable, to be known explicitly: N_CCD is taken into account automatically by the regularized deconvolution algorithm [15] in Eq. (12).

Consequently, the object radiance is obtained by

$$f_o = \frac{\mathrm{deconv}(G, h_o)}{\max(t, t_0)}, \qquad (13)$$
where t_0 (0 < t_0 ≤ t) is a small constant that prevents the denominator from being zero.

In step 5, we select the center pixel of the region S (represented by the blue pixel in Fig. 2) and divide it by max(t, t_0). Because the pixels in the enlarged area are not treated, the result is exactly f_o(i,j). After all the pixels in g are traversed, we obtain the object radiance f_o.
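
Assembling steps 2–5 into the pixel-by-pixel traversal gives roughly the following sketch. It reuses wiener_deconv from above and an apsf_kernel helper sketched in subsection 2.2.2; g_prime (the enlarged image), t_prime (the refined transmittance) and a scalar sky radiance f_a are assumed precomputed, and t is treated as locally constant over S for simplicity.

```python
import numpy as np

r, c = t_prime.shape          # size of the sensitive area
k = 9                         # kernel size (assumed)
pad = (k - 1) // 2
f_o = np.zeros((r, c))
for i in range(r):
    for j in range(c):
        S = g_prime[i:i + k, j:j + k]                # step 2: local region S
        h_o = apsf_kernel(t_prime[i, j], size=k)     # space-variant kernel
        # step 3: with t locally constant, convolving the constant path
        # radiance with the normalized h_a returns the constant itself.
        G = S - f_a * (1.0 - t_prime[i, j])
        restored = wiener_deconv(G, h_o, nsr=0.02)   # step 4: f_o o t over S
        # step 5: keep the center pixel and divide by max(t, t0)
        f_o[i, j] = restored[pad, pad] / max(t_prime[i, j], 0.1)
```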

Figure 4 is the flowchart for solving our degradation model. We employ existing methods [10,13], whose results prove robust, to determine the unknown parameters t, f_a, h_o and h_a.

Fig. 4 The flowchart for solving our degradation model.

2.2.1 Estimation of the transmittance and sky radiance

The medium transmittance t and the sky radiance f_a in our image degradation model are estimated with the method proposed by He et al. [10], which removes the atmospheric influence from images based on the dark channel prior. The prior is a statistic of haze-free outdoor images: most local non-sky regions contain some pixels with very low intensity in at least one color channel, which can be described as:

$$\min_c \left\{ \min_{y \in \Omega} \left[ f_o^c(y) \right] \right\} = 0, \qquad (14)$$
where f_o^c represents the r, g or b channel of the haze-free image f_o, and pixel y lies in the small local region Ω centered at pixel (i,j). The left-hand side of Eq. (14) is called the dark channel of the image f_o. In Ref. [10], the authors have proved that the dark channel prior is also adequate for images with sky regions.
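
A minimal sketch of the dark channel in Eq. (14), assuming an RGB image with values in [0, 1] and a square local region Ω (9 × 9 in the experiments of Section 3):

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img: np.ndarray, patch: int = 9) -> np.ndarray:
    """Minimum over the color channels, then over the local region Omega."""
    min_rgb = img.min(axis=2)
    return minimum_filter(min_rgb, size=patch)
```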

Figure 5(a) shows a haze-free outdoor image, while Fig. 5(b) is a hazy one. Their corresponding dark channels are exhibited in Figs. 5(c) and 5(d). The size of the local region Ω should be set large enough to cover the small objects whose radiance is inherently similar to the sky radiance; otherwise, the prior is invalid [10]. The intensities of most pixels in Fig. 5(c) are low, while due to the influence of the sky radiance they are much higher in Fig. 5(d), which is consistent with the dark channel prior.

Fig. 5 (a) The haze-free outdoor image. (b) The hazy outdoor image. (c) The dark channel of (a). (d) The dark channel of (b).

Consequently, according to the dark channel prior given in Eq. (14), the medium transmittance of the radiance from the target object is derived from Eq. (2) as follows:

$$t = 1 - \omega \min_c \left\{ \min_{y \in \Omega} \left[ \frac{g^c(y)}{f_a^c} \right] \right\}, \qquad (15)$$
where ω (0 < ω ≤ 1) is a constant that preserves a small amount of haze for the distant objects [10].

Although the sky radiance f_a depends heavily on the optical thickness, it can be obtained from the original image g, which is the only given information. We extract the top 0.1% brightest pixels in the dark channel of g, and among them the pixel with the highest intensity in g is selected as f_a [10].
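
Combining Eq. (15) with this sky-radiance selection gives the following sketch, which reuses dark_channel from above; the intensity measure (channel sum) and the variable names are our assumptions.

```python
import numpy as np

def estimate_t_and_fa(g: np.ndarray, patch: int = 9, omega: float = 0.7):
    """Estimate the transmittance of Eq. (15) and the sky radiance f_a."""
    dark = dark_channel(g, patch)
    n = max(1, int(0.001 * dark.size))              # top 0.1% of the dark channel
    rows, cols = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    best = g[rows, cols].sum(axis=1).argmax()       # brightest candidate in g
    f_a = g[rows[best], cols[best]]                 # per-channel sky radiance
    t = 1.0 - omega * dark_channel(g / f_a, patch)  # Eq. (15)
    return t, f_a
```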

In order to remove the block effect in t, shown in Fig. 6(b), the soft matting algorithm [10,16] is employed to refine the transmittance t:

$$(L + \lambda U)\, t' = \lambda t, \qquad (16)$$
where L is the matting Laplacian matrix [16], U is an identity matrix with the same size as L, and λ is a small value that constrains t', the desired transmittance. The refined transmittance t' is shown in Fig. 6(c). Hence, the transmittance of the uniform object is calculated as 1 − t'. More details can be found in [10,16].

Fig. 6 (a) The original degraded image. (b) The transmittance t calculated by Eq. (15). (c) The refined transmittance t' obtained from Eq. (16).

2.2.2 Estimation of the dispersion kernel

In [12], Narasimhan et al. discuss the atmospheric point spread function caused by multiple scattering and establish the relationship between the object radiance and the received radiance, as mentioned in subsection 2.1.

Based on the result of Narasimhan et al., Metari and Deschênes [13] introduce the generalized Gaussian distribution to approximate the atmospheric point spread function, i.e.,

$$\mathrm{APSF}(i,j;\sigma,T) = \frac{\exp\!\left[ -\left( i^2 + j^2 \right)^{kT/2} \big/ \left| A(kT,\sigma) \right|^{kT} \right]}{4\, \Gamma^2\!\left( 1 + \frac{1}{kT} \right) A(kT,\sigma)^2}, \qquad (17)$$
where σ is related to the forward scattering parameter q (0 ≤ q ≤ 1), which can be determined from g; T is the optical thickness; k (k > 0) is a constant related to T; Γ(·) is the gamma function; and A(·) is the scale parameter, equal to
$$A(kT,\sigma) = \left[ \sigma^2\, \Gamma\!\left( \frac{1}{kT} \right) \Big/ \Gamma\!\left( \frac{3}{kT} \right) \right]^{1/2}. \qquad (18)$$
Readers can refer to [13] for more details about Eq. (17).

The optical thickness T is related to the refined medium transmittance t' by the following equation:

$$T = -\ln t'. \qquad (19)$$
Hence, Eq. (17) can be utilized to approximate the dispersion kernel h_o. Similarly, h_a is calculated by replacing t' with 1 − t' in Eq. (17).
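
The sketch below builds the dispersion kernel at one pixel from Eqs. (17)–(19). The kernel size, the clamping of T for numerical safety, and the final normalization are our assumptions; σ is taken as a given scale (the paper derives it from the forward scattering parameter q).

```python
import numpy as np
from scipy.special import gamma

def apsf_kernel(t_prime_ij: float, sigma: float = 1.0,
                k: float = 0.5, size: int = 9) -> np.ndarray:
    """Generalized-Gaussian APSF of Eq. (17) at one pixel; h_a is obtained
    by passing 1 - t' instead of t'."""
    T = -np.log(max(t_prime_ij, 1e-6))  # optical thickness, Eq. (19)
    p = max(k * T, 0.25)                # exponent kT, floored for stability (ours)
    A = np.sqrt(sigma**2 * gamma(1.0 / p) / gamma(3.0 / p))  # Eq. (18)
    half = size // 2
    i, j = np.mgrid[-half:half + 1, -half:half + 1]
    kern = np.exp(-((i**2 + j**2) ** (p / 2.0)) / np.abs(A) ** p)
    kern /= 4.0 * gamma(1.0 + 1.0 / p) ** 2 * A**2           # Eq. (17)
    return kern / kern.sum()            # normalized (our convention)
```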

3. The results and comparison

To exhibit the effectiveness of our model presented by Eq. (9), we take several real outdoor images and implement the image recovery with the method of He et al. [10], the method of Metari and Deschênes [13], and our approach. In the experiments, the value of t_0 in Eq. (13) is set to 0.1, the size of the small local region Ω in Eq. (15) is 9 × 9 for all the tested images, and the constant ω is 0.7. In addition, λ equals 10^−5 in Eq. (16), and k and q in Eq. (17) are 0.5 and 0.7, respectively. The pixel values of the enlarged area in image g' are set equal to the nearest array border value. The deconvolution in Eq. (12) is executed by Wiener filtering in Matlab; the power spectrum ratio of the noise and the undegraded image in this algorithm is set to 0.02, which can be adjusted to suppress the amplified noise caused by the term N_CCD.

Figure 7(a) shows one original degraded image taken from an aircraft. Figure 7(b) presents the output based on the dark channel prior employing the model shown in Eq. (2) (i.e., the result by He et al.). Figure 7(c) exhibits the image obtained by deconvolving the model in Eq. (4) pixel by pixel with Wiener filtering (i.e., the result by Metari and Deschênes). Figure 7(d) is the result obtained by solving our model. Overall, all the methods perform much better than the original image. More specifically, in Fig. 7(b) the contrast of the image is dramatically improved and the color information is also recovered, although the edges of the object are not well refined because multiple scattering is neglected. The edges are sharp and clear in Fig. 7(c); however, the result is not significantly enhanced overall and the object is still difficult to distinguish. Our result achieves the best performance in both contrast and the sharpness of the object edges, as shown in Fig. 7(d).

Fig. 7 Image recovery results. (a) Input degraded image. (b) The result by He et al. (c) The result by Metari and Deschênes. (d) Our result.

Other experiments are exhibited in Fig. 8. Figures 8(a) and 8(e) are two original images taken from an aircraft, and Fig. 8(i) was captured on a heavily hazy day from the top of a hill. Figures 8(b), 8(f) and 8(j) are the corresponding results by He et al., Figs. 8(c), 8(g) and 8(k) are the results by Metari and Deschênes, and Figs. 8(d), 8(h) and 8(l) are the results of our approach. The quality of the output images obtained from our model is significantly enhanced, with high contrast, vivid color information and sharp object edges. Moreover, the proposed model works well not only for remote sensing images (Figs. 8(a) and 8(e)) but also for images captured in bad weather conditions (Fig. 8(i)).

Fig. 8 More results. First row: three test images. Second row: the corresponding results by He et al. Third row: the corresponding results by Metari and Deschênes. Fourth row: the corresponding results of our approach.

In order to compare the performances of the image recovery methods objectively, the Gray Mean Gradient (GMG) and Laplacian (LAP) image quality assessment methods [17] are used; the results (larger values represent better image quality) are given in Table 2. The results obtained from the proposed model achieve the largest assessment values for all the test images, which indicates that our approach outperforms the other two.

Table 2. Image quality assessment results for images in Figs. 7 and 8 (the number in bold denotes the largest value of each row)

4. Conclusion

We analyze the impact of atmospheric transmission on the radiance detected by the sensor in remote sensing and bad weather conditions, and propose an image degradation model and a recovery method that take multiple scattering into consideration. The radiance from the target object is decreased along the transmission path according to the Bouguer-Lambert-Beer law, and dispersed due to multiple scattering. Because the sky radiance entering the sensor can be regarded as coming from a uniform object, its attenuation is analyzed in the same way as that of the target object. In order to verify the effectiveness of our model, we employ existing algorithms to estimate the unknown parameters. Moreover, the performance of the proposed model is compared with that of the widely used model in which the multiple scattering effect is ignored. Experimental results show that the images obtained from our model are significantly improved in contrast, clearness, color saturation and object edges. In addition, the GMG and LAP image quality assessment values of the outputs of our algorithm are the largest, which indicates that the proposed model outperforms the widely used model.

Acknowledgments

We thank the anonymous reviewers for their valuable comments, which helped to improve this paper. This work is supported by the Chinese National Natural Science Foundation (No. 60977010) and the Chinese National Programs for High Technology Research and Development (No. 2009CB724006).

References and links

1. A. Berk, G. P. Anderson, P. K. Acharya, J. H. Chetwynd, L. S. Bernstein, E. P. Shettle, M. W. Matthew, and S. M. Adler-Golden, “Modtran4 user’s manual,” Air Force Research Laboratory, 1999.

2. J. P. Oakley and B. L. Satherley, “Improving image quality in poor visibility conditions using a physical model for contrast degradation,” IEEE Trans. Image Process. 7(2), 167–179 (1998). [CrossRef]   [PubMed]  

3. K. K. Tan and J. P. Oakley, “Physics-based approach to color image enhancement in poor visibility conditions,” J. Opt. Soc. Am. A 18(10), 2460–2467 (2001). [CrossRef]   [PubMed]  

4. J. Kopf, B. Neubert, B. Chen, M. Cohen, D. Cohen-Or, O. Deussen, M. Uyttendaele, and D. Lischinski, “Deep photo: model-based photograph enhancement and viewing,” ACM Trans. Graph. 27, 116 (2008).

5. S. G. Narasimhan and S. K. Nayar, “Contrast restoration of weather degraded images,” IEEE Trans. Pattern Anal. Mach. Intell. 25(6), 713–724 (2003). [CrossRef]  

6. Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, “Instant dehazing of images using polarization,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2001), 325–332.

7. R. T. Tan, “Visibility in bad weather from a single image,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2008), 1–8.

8. S. Shwartz, E. Namer, and Y. Y. Schechner, “Blind haze separation,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2006), 1984–1991.

9. R. Fattal, “Single image dehazing,” ACM Trans. Graph. 27(3), 72 (2008). [CrossRef]  

10. K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2009), 1956–1963.

11. S. G. Narasimhan and S. K. Nayar, “Chromatic framework for vision in bad weather,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2000), 598–605.

12. S. G. Narasimhan and S. K. Nayar, “Shedding light on the weather,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2003), 665–672.

13. S. Metari and F. Deschênes, “A new convolution kernel for atmospheric point spread function applied to computer vision,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2007), 1–8.

14. R. C. Gonzalez and R. E. Woods, Digital Image Processing, Second Edition, (Publishing House of Electronics Industry, 2002), Chap. 5.

15. M. Bertero and P. Boccacci, Introduction to Inverse Problems in Imaging (IOP, 1998), Chap. 5.

16. A. Levin, D. Lischinski, and Y. Weiss, “A closed form solution to natural image matting,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2006), 61–68.

17. W. Dong, Y. Chen, Z. Xu, H. Feng, and Q. Li, “Image stabilization with support vector machine,” J. Zhejiang Univ.-Sci. C Comput. & Electron. 12(6), 478–485 (2011). [CrossRef]  
