Performance bounds on synchronous laser line scan systems


Abstract

In this article a performance bound is derived on the feasible image resolution for a class of imaging systems referred to as synchronous laser line scan systems. Most often, these systems use a narrow projected beam (typically a laser) in conjunction with a very small field of view receiver that is synchronously scanned. Here, a bound on the maximum system resolution is derived for the case in which both source and receiver are “delta function like”. The bound demonstrates that the best achievable overall system point spread function is the square of the one-way point spread function of the medium.

©2005 Optical Society of America

1. Introduction

One of the most difficult situations in which imaging is attempted is viewing through turbid media. Motivated by the many applications in medicine, environmental science, and the military, there has been a prevailing need either to formulate better imaging geometries or to understand the limitations of existing ones.

The achievable resolution in turbid media is typically limited by the severe scattering that photons are subjected to when transiting back after reflection from a target of interest. This is in contrast with many areas, such as optical microscopy and semiconductor wafer inspection, where more often than not, resolution is imposed by the diffraction limit.

The most conventional and oldest method of forming images is to illuminate a subject with a light source having a broad beam pattern. The light reflected from the target can then be imaged by some type of camera system. Under the assumption that the observed resolution is limited by the point spread function (psf) of the medium, a simplifying assumption which is often valid represents the observed image I(x′,y′) as a convolution of the medium psf with the reflectance map ρ(x,y), so that I(x′,y′) = psf(x,y) ⊗ ρ(x,y) (⊗ is the convolution operator). Equivalently, the observed image can be written as I(x′,y′) = ∫∫ ρ(x,y) psf(x′−x, y′−y) dx dy. The linear systems theory which describes this process has been covered extensively in standard texts [1] and will not be elaborated further here.
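
As a minimal numerical illustration of this convolutional model (Python/NumPy/SciPy; the Gaussian psf, its width, and the bar-shaped reflectance map below are arbitrary stand-ins for illustration, not values from this article):

```python
import numpy as np
from scipy.signal import fftconvolve

n = 256
x = np.arange(n) - n // 2
X, Y = np.meshgrid(x, x)

rho = np.zeros((n, n))                  # reflectance map: a bright bar on black
rho[118:138, 96:160] = 1.0

sigma = 4.0                             # illustrative medium psf width
psf = np.exp(-(X**2 + Y**2) / (2 * sigma**2))
psf /= psf.sum()                        # normalize so total power is conserved

I = fftconvolve(rho, psf, mode="same")  # I(x', y') = psf ⊗ rho
```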

One common goal in imaging research has been to increase spatial resolution. This has been pursued both in hardware, through the design of sophisticated systems, and in software, through the use of signal processing algorithms. As one option, common in microscopy, the use of a scanning source and receiver can present substantial advantages over non-scanning systems. So, for example, in the case of confocal optical imaging, the observable diffraction-limited point spread function is the square of the more traditional, non-confocal point spread function [2]. This leads to increased image resolution via a narrowing of the overall system point spread function.
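
For a Gaussian one-way psf, the narrowing obtained by squaring can be made explicit (a standard calculation, included here for orientation, not taken from this article): since

( exp(−r²/2σ²) )² = exp(−r²/2(σ/√2)²),

squaring a Gaussian psf of width σ yields a Gaussian of width σ/√2, i.e., both the standard width and the FWHM shrink by a factor of √2.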

In this article a problem of long-standing interest to those wishing to explore and work in the ocean is considered: optically imaging the underwater environment with artificial illumination. Under almost all circumstances, underwater viewing is limited by the turbidity of the environment. The effect of the water, suspended particles, and organisms is both to attenuate and to scatter light. The ranges at which informative images can be obtained vary greatly. In practice, even under the most ideal conditions, ranges of less than a hundred meters are possible. A complicating factor is the severe backscatter, or volume scatter, which creates a large veiling glow that degrades image contrast. Practical solutions to circumvent this effect include large camera–light separations, scanned systems, or pulsed systems [3].

The latest generation of underwater optical imaging systems is not limited by this backscatter effect and is constrained more by the spatial low-pass nature of the forward scatter of light as it travels to the camera after reflection from the target [4]. One class of underwater imaging system that has shown good performance is known as the laser line scan (LLS) system. These systems have been developed over the last decade and have been used primarily to image the sea floor and objects on it. Figure 1 shows an illustration of this type of system mounted on an underwater vehicle [4]. Reference [4] also contains numerous images from this system. An interesting feature of these images is that they display a “crispness” that is not evident in images from more traditional systems. As described in [5], the overall effect of scanning a source together with a receiver of small angular view is to increase system resolution via the elimination of forward scattered light. Natural questions that arise in examining the performance of these systems are: What is the maximum observable resolution? Is it possible to increase the system resolution without limit by decreasing the source beam width and receiver field of view, or is there some asymptotic bound for a “delta function-like” source beam pattern and receiver field of view?


Fig. 1. (a) An example of a laser line scan system deployed on an underwater vehicle. (b) A simplified schematic diagram of the four-receiver LLS sensor which has been used for either fluorescence or color imaging. The diagram shows the optics associated with propagating a single color laser into the water and then receiving it on a set of 4 channels, each with its own PMT. (Reprinted courtesy of [4]).


2. Mathematical formulation

2.a. Preliminary considerations

An important consideration in understanding the potential performance of underwater imaging systems is the basic physics of the propagation of light in water. Fortunately, the propagation of light in water is well understood and has been treated in standard texts [6, 7]. One approach to predicting underwater imaging system performance is to incorporate this information into computer models whose goal is to simulate their performance in a variety of environments [8, 9]. A brief summary of the basic facts is provided here in order to provide a foundation for the proposed imaging model.

Optical oceanographers and limnologists refer to the very few physical parameters that govern the propagation of light underwater as the inherent optical properties [10]. Neglecting polarization effects, the absorption coefficient, a, and the volume scattering function, β(θ), are all that is needed to characterize the medium. Radiative fields can then be predicted from knowledge of the input light field. Since these parameters are a function of both three-dimensional location and wavelength, the inherent optical parameters can be represented as a(r,λ) and β(r,θ,λ) over a three-dimensional volume V (r ∈ V). The total attenuation c is defined as c = a + b (where b is the total scattering coefficient, derivable from β(θ)). Although this small number of parameters represents a fairly complete description of the optical properties of the medium, the computation of light fields can be quite complicated depending on the input light source distribution.
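
For reference, b follows from β(θ) by integration over solid angle (the standard relation in ocean optics, assuming azimuthal symmetry of the scattering):

b = 2π ∫₀^π β(θ) sin θ dθ,   c = a + b.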

A common approximation in modeling image formation underwater is that the medium is homogeneous and that the light source is monochromatic. This characterizes the environment by a single set of inherent optical properties {a, β(θ)}. Although more complex formulations are possible, this will constitute a basic assumption of the proposed treatment. Note that this treatment also neglects factors which convert light from one wavelength to another, such as fluorescence or Raman scatter.

A simple and useful way to connect the physics to underwater imaging is to characterize the impulse response of the medium to a source with a delta function-like beam pattern at location r propagating in direction γ. Assuming that this source propagates light along the z axis in the direction of positive z, one can characterize the irradiance due to this point source at distance d as I(x,y;d). This is identical to the point spread function, psf(x,y;d), of the medium. It is all that is needed to characterize the irradiance of the light field at a given distance d, I(x,y;d), given an output light field at the z origin, I(x,y;0).

As one example, consider

I(x,y;d) = I(x,y;0) ⊗ ( exp(−cd) δ(x,y) + g(x,y;d) ),        (1)

where δ(x,y) is the two-dimensional delta function. The first term inside the parentheses (to the right of the convolution sign) can be interpreted as the irradiance on plane d due to the unscattered light beam, which has been attenuated by exp(−cd). The second term contains the scattered light.

McGlamery [8] considered the kernel g(x,y;d) to be

g(x,y;d) = ( exp(−Gd) − exp(−cd) ) ℑ⁻¹{ exp(−Bdf) }.        (2)

Here ℑ⁻¹{exp(−Bdf)} indicates that an inverse Fourier transform is taken of an exponentially decaying function of an empirical coefficient B, the distance d, and the spatial frequency f, where f is the radial distance from the origin in the frequency plane. G is an empirical constant which determines (along with c) how much power is contained in the second term of Eq. (1). Note that other parameterizations of the point spread function are possible [11].
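
For concreteness, Eqs. (1)–(2) are easy to realize numerically. In the sketch below, the values of B, G, c, and d are illustrative placeholders, not values from this article. Note one consequence of Eq. (2): at zero spatial frequency exp(−Bd·0) = 1, so the scattered kernel carries total power exp(−Gd) − exp(−cd), and the full psf carries exp(−Gd).

```python
import numpy as np

n, dx = 256, 0.01                  # grid size and sample spacing (assumed, meters)
B, G, c, d = 0.5, 0.3, 1.2, 4.0    # assumed empirical/medium parameters

fx = np.fft.fftfreq(n, dx)         # spatial frequencies (cycles/meter)
FX, FY = np.meshgrid(fx, fx)
f = np.sqrt(FX**2 + FY**2)         # radial spatial frequency

# g(x,y;d) = (exp(-G d) - exp(-c d)) * invFT{ exp(-B d f) }   (Eq. 2)
g = (np.exp(-G * d) - np.exp(-c * d)) * np.fft.ifft2(np.exp(-B * d * f)).real

# Full psf: attenuated unscattered delta (origin pixel) plus g   (Eq. 1)
psf = g.copy()
psf[0, 0] += np.exp(-c * d)
# Sanity check: the total power in the psf decays as exp(-G d).
assert np.isclose(psf.sum(), np.exp(-G * d))
```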

As the computation of the irradiance at plane d depends critically on the form of the convolution kernel, the derivation of the point spread function from the inherent optical properties has been of interest. Clearly, the small-angle scatter in the volume scattering function shapes the point spread function in an important way. Analytic approximations to this transformation (from volume scattering function to point spread function) have been derived under a small-angle approximation [5, 12]. In addition, Monte Carlo simulations have verified both the form of this transformation and the assumption that the system can be viewed as a linear system out to large numbers (5–7) of total attenuation lengths, or e-folding distances, for c [13]. This assumption is embedded in Eq. (2).

2.b. Derivation of the overall point spread function

We next consider incorporating the above treatment into an understanding of a general class of imaging systems that includes the synchronous scan imaging systems. Although a complete description of the imaging system would contain the effects of the lenses on the image formation process, a simple approach is taken here in order to describe the salient processes. With reference to Fig. 2, a fixed transmitter propagates a beam of light onto an attenuating screen. The pattern incident on the screen is the convolution of the transmit beam pattern with the impulse response of the medium, as described above. After passing through the attenuating screen, the resultant pattern is propagated through the medium to another plane where the receive aperture is located. The receive aperture then integrates the energy which falls on it. This characterizes the response of the system for a single location of the source and receiver. In practice, source and receiver are translated in order to form an image; however, it is mathematically equivalent to translate the attenuating screen instead, and that is the approach taken here.


Fig. 2. A diagram to illustrate the geometry of the approach taken. Starting on the left, the figure shows the projection of the source onto the plane (x0,y0), resulting in the light field I(x0,y0) = wT(x0,y0). This irradiance is then propagated to the right to just before plane (x1,y1) to obtain I⁻(x1,y1) by convolving it with the point spread function (psf) of the medium. The light is then attenuated by the screen T(x1,y1) via multiplication to yield light distribution I⁺(x1,y1). Propagation to the plane (x2,y2) occurs via convolution with the psf to yield light distribution I(x2,y2). This light field is then integrated over the region shown as wR (the small circle inside the ellipse) to yield a single value for the image. An image can be formed by translating the attenuating screen by (x′,y′): T(x1−x′, y1−y′), in order to obtain I(x′,y′).


The treatment starts by representing wT(x,y) as a weighting function for the transmitted beam and wR(x,y) as a weighting function for the receiver (assumed to be a single optical element). Figure 2 shows that the light which is incident on the attenuating screen can be represented as originating from plane (x0,y0) with pattern I(x0,y0) = wT(x0,y0). A useful way of thinking about this is as the projection of light onto the target plane in the absence of medium effects. Transmitting this pattern through the medium results in the light being both attenuated and spread via convolution with a point spread function which depends on the medium and the distance d: psf(x0,y0,x1,y1;d). The light incident on the target screen can be represented as:

I⁻(x1,y1) = psf(x0,y0,x1,y1;d) ⊗ wT(x0,y0).        (3)

Next, attenuation by the screen results in a light pattern just after passing through the target screen as

I⁺(x1,y1) = I⁻(x1,y1) T(x1,y1).        (4)

This pattern is then propagated to the location of the receiver. The propagation can be represented, as before, as a convolution with the medium psf:

I(x2,y2) = psf(x1,y1,x2,y2;d) ⊗ I⁺(x1,y1).        (5)

As a final step in the image formation for a single location of the scanner, this irradiance distribution is integrated over the field of view of the receiver, with the weighting function wR(x2,y2), to form a single value:

I(x,y) = ∫∫ wR(x2,y2) I(x2,y2) dx2 dy2.        (6)

An image can then be formed by scanning the target in the variables −x′, −y′ (an inverted image), so that the entire image formation process can be represented as:

I(x,y) = ∫∫ wR(x2,y2) [ psf(x1,y1,x2,y2;d) ⊗ psf(x0,y0,x1,y1;d) ⊗ wT(x0,y0) T(x−x1, y−y1) ] dx2 dy2.        (7)

Writing the convolutions out, the process can be represented as:

I(x,y) = ∫∫∫∫∫∫ wR(x2,y2) wT(x0,y0) psf(x2−x1, y2−y1;d) psf(x1−x0, y1−y0;d) T(x−x1, y−y1) dx0 dy0 dx1 dy1 dx2 dy2.        (8)

Although this equation looks a bit onerous, it can be manipulated into a form that yields substantial physical insight via:

∫∫ wR(x2,y2) psf(x2−x1, y2−y1;d) dx2 dy2 = wR(x2,y2) ⊗ psf(x2,y2,x1,y1;d).        (9)

Letting

wRP(x1,y1;d) = wR(x2,y2) ⊗ psf(x2,y2,x1,y1;d),        (10)

the expression now becomes

I(x,y) = ∫∫∫∫ wRP(x1,y1;d) wT(x0,y0) psf(x1−x0, y1−y0;d) T(x−x1, y−y1) dx0 dy0 dx1 dy1.        (11)

Similarly, setting

wTP(x1,y1;d) = ∫∫ wT(x0,y0) psf(x1−x0, y1−y0;d) dx0 dy0 = wT(x0,y0) ⊗ psf(x1,y1,x0,y0;d),        (12)

the expression simplifies to

I(x,y) = ∫∫ wRP(x1,y1;d) wTP(x1,y1;d) T(x−x1, y−y1) dx1 dy1.        (13)

The advantage of this expression is that it references the entire imaging process to the (x1, y1) plane. It also highlights the symmetry in that the beam pattern for the transmitter and the receive pattern for the receiver can be interchanged with identical results.

Finally, this expression can be viewed as a convolution with the composite operator

wRP,TR(x1,y1;d) = wRP(x1,y1;d) wTP(x1,y1;d),        (14)

so that

I(x,y) = wRP,TR(x1,y1;d) ⊗ T(x1,y1;x,y).        (15)

The overall point spread function for the system is therefore equal to wRP,TR(x1,y1;d), which is defined in Eq. (14).
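
Since Eqs. (3)–(6) involve only linear, shift-invariant operations, the collapse to Eq. (13) can be verified numerically. The sketch below (Python/NumPy; a one-dimensional analogue with an arbitrary Gaussian standing in for the medium psf, and illustrative choices for wT, wR, and T) scans the screen through the two-pass model and checks that the result equals a single correlation of the screen with the composite kernel of Eq. (14):

```python
import numpy as np

n = 128
x = np.arange(n) - n // 2
gauss = lambda s: np.exp(-x**2 / (2 * s**2))

psf = np.fft.ifftshift(gauss(3.0) / gauss(3.0).sum())  # medium psf, origin at index 0
wT, wR = gauss(1.5), gauss(2.0)                        # illustrative transmit/receive patterns
T = (np.abs(x) < 10).astype(float)                     # attenuating screen (a bar target)

cconv = lambda u, v: np.real(np.fft.ifft(np.fft.fft(u) * np.fft.fft(v)))

# Two-pass model: translate the screen, propagate twice, integrate (Eqs. 3-6).
wTP = cconv(wT, psf)                                   # Eq. (12); this is also I-(x1,y1)
direct = np.array([np.sum(wR * cconv(wTP * np.roll(T, s), psf)) for s in range(n)])

# Composite form: the product kernel of Eq. (14) applied to the scanned screen.
wRP = cconv(wR, psf)                                   # Eq. (10) (psf is symmetric here)
composite = cconv(wRP * wTP, np.roll(T[::-1], 1))      # correlation via a flipped screen

assert np.allclose(direct, composite)                  # the two agree, as in Eq. (13)
```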

2.c. Consideration of the individual cases

In the above discussion, a general expression has been derived for the performance of a system that synchronously scans both source and receiver. Here, the overall point spread function is examined under different configurations for the source and receiver, specifically for different wR and wT functions. The four cases treated are those in which source and receiver are either delta functions or constant functions. The case where either function is a delta function corresponds to the limiting case of a very narrow propagating beam for the transmitter and/or a very small field of view for the receiver. The constant case, where either wR(x2,y2) or wT(x0,y0) = C, independent of x and y, corresponds to a transmit beam or receiver that is omnidirectional and covers the entire plane.

Case 1: wR(x2,y2)=δ(x2,y2) and wT(x0,y0)=δ(x0,y0).

Therefore, wRP(x1,y1;d) = psf(x1,y1;d) and wTP(x1,y1;d) = psf(x1,y1;d), so that

wRP,TR(x1,y1;d) = psf(x1,y1;d) psf(x1,y1;d) = ( psf(x1,y1;d) )².        (16)

Cases 2 & 3: wR(x2,y2)=δ(x2,y2) and wT(x0,y0)=C or

wR(x2,y2)=C and wT(x0,y0)=δ(x0,y0).

Here, again, wRP(x1,y1;d) = psf(x1,y1;d). In addition, assuming that the point spread function is normalized so that ∫∫ psf(x1−x0, y1−y0;d) dx1 dy1 = 1 for all (x0,y0) implies that wTP(x1,y1;d) = C, so that

wRP,TR(x1,y1;d) = C psf(x1,y1;d).   (Case 2)        (17)

The same result holds for the second sub-case, since here

wRP(x1,y1;d) = C and wTP(x1,y1;d) = psf(x1,y1;d), so that

wRP,TR(x1,y1;d) = C psf(x1,y1;d).   (Case 3)        (18)

Case 4: wR(x2,y2)=C and wT(x0,y0)=C.

Here both wRP(x1,y1;d) = C and wTP(x1,y1;d) = C (as considered above), so that

wRP,TR(x1,y1;d) = C².        (19)

Consideration of these four cases highlights the fact that the highest resolution image is formed in Case 1, when both the source beam pattern and the receive beam pattern are delta functions. In this case, the overall point spread function (for a cylindrically symmetric medium psf) is the square of the medium point spread function. Note that Cases 2 and 3 result in an overall point spread function equal to the one-way point spread function of the medium. This should not be surprising, as Case 2 is identical to the conventional imaging case where the target is illuminated with a broad beam and imaged with a camera, each element of which subtends a field of view that is small relative to the medium psf. Case 4 has the worst resolution: no image is possible, since having broadcast a wide beam and integrated over the entire received field, only a constant value is recorded as a function of target translation.
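
The four composite kernels can likewise be reproduced numerically. In this one-dimensional sketch (illustrative Gaussian psf and C = 1, not values from the article), each case follows directly from Eq. (14):

```python
import numpy as np

n = 128
x = np.arange(n) - n // 2
psf = np.exp(-x**2 / (2 * 4.0**2))
psf /= psf.sum()                        # normalized one-way medium psf

delta = np.zeros(n); delta[n // 2] = 1.0
const = np.ones(n)                      # omnidirectional pattern with C = 1

# Project a pattern through the medium: w ⊗ psf (circular FFT convolution).
project = lambda w: np.real(np.fft.ifft(np.fft.fft(w) * np.fft.fft(np.fft.ifftshift(psf))))

composite = lambda wR, wT: project(wR) * project(wT)    # Eq. (14)

assert np.allclose(composite(delta, delta), psf**2)     # Case 1: squared psf
assert np.allclose(composite(delta, const), psf)        # Cases 2/3: one-way psf
assert np.allclose(composite(const, const), 1.0)        # Case 4: constant, no image
```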

3. An experimental example

In this section the above theory is illustrated using data from an experimental underwater imaging system. The system was developed in the author's lab and has been used in several scientific expeditions to record three-dimensional information about the sea floor [14, 15]. Figure 3 illustrates that the system consists of a scanning laser beam and a one-dimensional CCD chip. The image plane of the CCD camera is coplanar with the plane of illumination of the laser. Under typical operating circumstances the laser is pulsed into a single look direction and the reflected light is then imaged by the entire CCD chip. The direction of the light beam is then changed slightly and the CCD chip records another vector of values. The process is repeated for each pointing direction of the laser in order to scan a single line of the sea floor. Since each pointing angle of the laser results in a vector of data values (the output of the CCD chip), the data collected from a single scanned line form a two-dimensional matrix whose columns correspond to the pointing angles of the laser and whose rows contain values proportional to the light received on the CCD chip, integrated over some small solid angle. Towing the system over terrain creates a raster scan-like image of the sea floor.

Here the practical advantage of using a synchronous laser line system is explored via the processing of data from a simple set of experiments. In practice, the system computes target reflectance by taking the maximum value of the CCD output for each of the pointing angles. This provides an adaptable scheme for garnering the benefits of laser line scan systems without the mechanically synchronized, small field of view receiver that some systems require. Used in this manner, the system corresponds to a Case 1 imaging system as considered above, with wT(x0,y0) corresponding to the transmit beam pattern of the laser and wR(x2,y2) corresponding to the field of view of a single CCD pixel. In order to compare the results obtained in this configuration with those of a conventional system, an omnidirectional transmitter was simulated by integrating the two-dimensional output array over the columns. This produces a vector output which can be regarded as a single line of the image that would have been recorded if a constant illumination source had been used over the angular extent of the integrated incident light.
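
The two read-outs amount to simple array reductions. In the sketch below, M is a hypothetical placeholder for the recorded scan matrix (in practice its entries would be the CCD vectors described above); the shape and values are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.random((512, 300))    # placeholder: 512 CCD pixels x 300 pointing angles

# Synchronous line-scan read-out (Case 1): for each pointing angle keep the
# brightest CCD pixel, i.e., a small receiver field of view tracking the beam.
lls_line = M.max(axis=0)      # one value per pointing angle

# Simulated conventional read-out (Case 2): integrate over pointing angles,
# mimicking a broad, constant illumination source viewed by the CCD.
conventional_line = M.sum(axis=1)   # one value per CCD pixel
```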


Fig. 3. (a) A 3-dimensional model of the scanning laser system used for the simulations. (b) A schematic diagram illustrating the basic components of the system in (a).


The experimental data was acquired in a large tank filled with seawater by suspending a target, consisting of a white bar on a black background, below the system at a range of 3.85 meters. Data was acquired at different levels of turbidity, obtained by adding Maalox, as the beam was scanned over the target. The inherent optical properties of the water were estimated using a transmissometer at a wavelength (488 nm) close to that of the laser (532 nm). Table 1 summarizes the measured values for the set of experimental runs. Note that the value of c for Test (a) corresponds to “clear” filtered seawater to which no Maalox was added (and for which, unfortunately, no transmissometer measurement was made).


Table 1. The total attenuation coefficients measured for the experiments

Data was analyzed by extracting a region from the two-dimensional data set which included both the bar target and the light scattered from it. In order to simulate a conventional image collection system, the two-dimensional output for a single scan line was integrated along the columns over some subset of pointing angles. This produced a vector of simulated data for each of the turbidity conditions listed in Table 1. In contrast, the simulated laser line scan image was formed by taking the maximum value of the output of the CCD camera for each of the pointing angles.

Although the subjective impression was that the resolution of the simulated laser line system was better than that of the conventional image, a simple processing scheme was implemented to test this conjecture. The processing consisted of normalizing the two data sets by first subtracting the minimum value of each set. This was followed by renormalization so that the DC component of the Fourier transform was equal to one. A comparison of the absolute value of the FFT for each of the data sets and for each simulated condition is shown in Fig. 4. Given that the analytic Fourier transform of the bar would be a sin(x)/x pattern, the data display a damped version of this function, presumably due to the effects of the medium. The plots reveal that, in all cases, the line scan system achieves superior resolution in comparison to the conventional imaging system. A more sophisticated analysis would attempt to measure the actual medium psf and the beam pattern of the transmitter, and compare the predicted results to the experimentally observed data set. These endeavors have not been undertaken here and are deferred to a future publication.
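
A sketch of that processing chain follows; the two blurred bar profiles are synthetic stand-ins for the measured lines (the real data are not reproduced here), while the normalization and |FFT| comparison follow the scheme just described:

```python
import numpy as np

x = np.arange(256)
bar = ((x > 100) & (x < 156)).astype(float)                   # ideal bar target
line_conv = np.convolve(bar, np.ones(15) / 15, mode="same")   # stand-in: heavier blur
line_lls = np.convolve(bar, np.ones(5) / 5, mode="same")      # stand-in: lighter blur

def normalized_spectrum(line):
    line = line - line.min()           # subtract the minimum value of the set
    spec = np.abs(np.fft.fft(line))
    return spec / spec[0]              # renormalize so the DC component is one

S_conv = normalized_spectrum(line_conv)
S_lls = normalized_spectrum(line_lls)
# A more slowly damped |FFT| (here S_lls) indicates finer spatial resolution;
# for an ideal bar the spectrum follows a |sin(f)/f| pattern.
```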


Fig. 4. A comparison of the absolute values of the Fourier Transforms for the conventional imaging system (red) versus the scanning system (blue) for the four data sets. The total attenuation values for the environmental conditions are shown in Table 1.


4. Conclusions – Implications for Underwater Optical Imaging

In this article, a general theory for understanding the resolution performance of synchronous laser line scan systems has been developed. The results indicate that under the most ideal circumstances the achievable overall point spread function will be equal to the square of the medium point spread function. This is a natural result: in a way similar to other scanning systems, higher resolution is achieved than with conventional systems [2]. The fact that narrow beam systems may achieve higher resolution than a system with only a narrow beam source or only a narrow field of view receiver was highlighted in [5]. However, the treatment here is more complete in that a simple expression for the limiting bound on resolution is derived. In addition, the results present the theory in the context of an imaging system that is readily interpretable.

As one consequence of the theory, it is interesting to note that scanning systems will not always result in higher resolution than non-scanned ones. So, for example, the exact nature of the weighting functions wT(x0,y0) and wR(x2,y2) dictates when it may or may not be productive to use a scanning system. Clearly, a wide incident beam pattern combined with a wide receiver pattern will worsen the resolution. However, implementing a set of very small beam patterns for the receiver is easy via the use of modern camera technology. Nevertheless, when receiver sensitivity is paramount, the use of PMTs as a single receiver may outweigh the associated complexities of creating such a system. Provided the beam pattern for the receiver is narrow enough, superior resolution can then be achieved.

Finally, we note that the treatment can also be extended to the propagation of light in other turbid media. So, for example, systems that use a scanning light beam to scatter from volumetric targets in biological media that are subject to attenuation and scatter can be interpreted using the theory developed here. The resultant limitations due to scattered light can then be readily derived for this class of imaging systems.

Acknowledgments

The author would like to thank the Office of Naval Research, Environmental Optics, and the U.S. Army Medical Research and Materiel Command, under W81XWH-04-1-0660, for support. The author would also like to thank Karl D. Moore for collecting the data used to illustrate the differences between the line scan and conventional systems, and Paul Roberts for reviewing an earlier version of the manuscript.

References and links

1. J. W. Goodman, Introduction to Fourier Optics, 2nd ed. (McGraw-Hill, New York, 1996).

2. T. Wilson and C. Sheppard, Theory and Practice of Scanning Optical Microscopy (Academic Press, London, 1984).

3. J. S. Jaffe, “Computer Modeling and the Design of Optimal Underwater Imaging Systems,” IEEE J. Ocean. Eng. 15, 101–111 (1990).

4. J. S. Jaffe, J. McLean, M. P. Strand, and K. D. Moore, “Underwater Optical Imaging: Status and Prospects,” Oceanography 14, 64–75 (2002).

5. E. P. Zege, A. P. Ivanov, and I. L. Katsev, Image Transfer through a Scattering Medium (Springer-Verlag, Berlin, 1991).

6. K. S. Schifrin, Physical Optics of Ocean Water (American Institute of Physics, New York, 1988).

7. C. D. Mobley, Light and Water: Radiative Transfer in Natural Waters (Academic Press, New York, 1994).

8. B. J. McGlamery, “A Computer Model for Underwater Camera Systems,” in Ocean Optics VI, S. Q. Duntley, ed., Proc. SPIE 208 (1979).

9. J. S. Jaffe, “Monte Carlo Modeling of Underwater Image Formation: Validity of the Linear and Small-Angle Approximations,” Appl. Opt. 34, 5413–5421 (1995).

10. R. W. Preisendorfer, Hydrologic Optics, Vol. II: Foundations (U.S. Dept. of Commerce, Honolulu, HI, 1976).

11. K. J. Voss, “Simple Empirical Model of the Oceanic Point Spread Function,” Appl. Opt. 30, 2647–2651 (1991).

12. W. H. Wells, “Loss of Resolution in Water as a Result of Multiple Small-Angle Scattering,” J. Opt. Soc. Am. 59, 686–691 (1969).

13. J. S. Jaffe, “Monte Carlo Modeling of Underwater Image Formation: Validity of the Linear and Small-Angle Approximations,” Appl. Opt. 34, 5413–5421 (1995).

14. K. D. Moore, J. S. Jaffe, and B. L. Ochoa, “Development of a New Underwater Bathymetric Laser Imaging System: L-Bath,” J. Atmos. Ocean. Technol. 17, 1106–1117 (2000).

15. K. D. Moore and J. S. Jaffe, “Time-Evolution of High-Resolution Topographic Measurements of the Sea Floor Using a 3-D Laser Line Scan Mapping System,” IEEE J. Ocean. Eng. 27, 525–545 (2002).
