Enhanced extended range underwater imaging via structured illumination

Abstract

In this article, the utility of structured illumination for enhancing the contrast, and hence the range capability, of an underwater imaging system is explored. The proposed method consists of transmitting a short pulse of light in a grid-like pattern composed of multiple, narrow, delta-function-like beams. The grid pattern can be arranged either along a one-dimensional line or over an area as a two-dimensional pattern. Scanning the pattern in time results in the sequential illumination of the entire scene. The receiving system imposes the same grid-like sensitivity pattern on the reflected light, and the time-sequenced images are then simply superimposed. The system can be viewed as a parallel implementation of a Laser Line Scan system in which multiple beams are projected and received instead of a single one. The performance enhancement over more conventional systems that project either a sheet or an area of light is compared for a challenging underwater environment via computer simulations. The resulting images are analyzed as a function of the spacing between the projected light beams to characterize contrast and resolution. The results indicate that reasonable gains are obtainable for close spacing between the beams, while quite significant gains are predicted for larger spacings. Structured illumination systems can therefore collect images more rapidly than systems that scan a single beam, albeit with concomitant trade-offs in contrast and resolution.

©2010 Optical Society of America

1. Introduction

Underwater optical imaging continues to be an important method used to explore the oceans. In many situations, the ocean presents a relatively clear environment for imaging and this has allowed great strides in environmental characterization. However, optical turbidity occurs in many interesting situations where underwater visibility can be greatly hampered. Methods to increase the contrast, range, and hence, utility of underwater images have therefore been under investigation for many years. Although some fundamental limits to underwater imaging were established by Duntley and colleagues in the 1960’s and 70’s [1] and are well understood, as explained in several classic books on underwater light propagation [2,3], the latest generation of optical equipment continues to create new opportunities for increasing the utility of underwater images with increased range and contrast.

As a brief reminder, the basic physics of light propagation in the sea considers both the attenuation and scattering of light in the more optically transparent window of the electromagnetic spectrum of 400 nm – 700 nm. Total light attenuation per meter (c) consists of both the absorption (a) and scatter (b) that a photon may suffer upon propagation from one location (r0) to another (r), as is described by the simple exponential law in Eq. (1):

$$ I(r) = I(r_0)\, e^{-c(r - r_0)} = I(r_0)\, e^{-(a+b)(r - r_0)}. \tag{1} $$
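As a numerical illustration of Eq. (1), the short sketch below evaluates the exponential loss over a range of path lengths. The specific values of a and b are hypothetical, chosen only to represent moderately turbid water; they are not taken from the text.

```python
import numpy as np

# Hypothetical inherent optical properties (1/m), for illustration only.
a, b = 0.18, 0.32
c = a + b                          # total beam attenuation coefficient, c = a + b

r = np.linspace(0.0, 14.0, 8)      # path lengths in meters, with r0 = 0
I0 = 1.0                           # irradiance at the source
I = I0 * np.exp(-c * r)            # Eq. (1): I(r) = I(r0) exp(-c (r - r0))

for ri, Ii in zip(r, I):
    print(f"r = {ri:5.1f} m ({c * ri:4.1f} attenuation lengths): I/I0 = {Ii:.5f}")
```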
The angular dependence of scatter is taken into account via b(θ), the scattering coefficient in the direction θ, defined per unit meter per unit steradian, so that, as described by Eq. (2):
$$ b = \int_{4\pi} b(\theta)\, d\omega. \tag{2} $$
The integral is taken over all 4π steradians. The more common volume scattering, or phase, function is the probability that a photon is scattered into solid angle dω, as in Eq. (3):
$$ \frac{1}{4\pi} \int_{4\pi} \beta(\theta)\, d\omega = 1. \tag{3} $$
β(θ) is therefore a normalized and scaled version of b(θ). In the case that polarization is neglected, a, b, and β(θ) are a complete set of environmental variables needed to understand and predict the propagation of light in homogeneous media.
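The normalization of Eq. (3) is easy to verify numerically. The sketch below uses a Henyey-Greenstein function as a stand-in for an oceanic phase function; both the functional form and the asymmetry parameter g are assumptions made here for illustration, not part of the simulations in this paper.

```python
import numpy as np

def beta_hg(theta, g=0.924):
    """Henyey-Greenstein stand-in for beta(theta), scaled so that
    (1/4pi) * integral of beta(theta) d(omega) over 4pi sr equals 1 (Eq. 3)."""
    p = (1.0 - g**2) / (4.0 * np.pi * (1.0 + g**2 - 2.0 * g * np.cos(theta))**1.5)
    return 4.0 * np.pi * p

theta = np.linspace(0.0, np.pi, 200001)
dw = 2.0 * np.pi * np.sin(theta)          # solid-angle weight for azimuthal symmetry
check = np.trapz(beta_hg(theta) * dw, theta) / (4.0 * np.pi)
print(f"(1/4pi) * integral of beta over 4pi sr = {check:.4f}")  # ~1.0
```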

In general, the overall goal of the underwater imaging system designer is to maximize range, contrast, or both. This is accomplished via the employment of either simple or exotic optical equipment. In the simplest case, a camera and an underwater light source are positioned to take into consideration the environmental conditions, the beam pattern and power of the light source, and the sensitivity and dynamic range of the camera. As is well known, the placement of lights and camera plays a critical role in reducing the backscatter that occurs if the two are placed too close together. To obtain improved range and contrast, however, more exotic systems that employ time-gated lasers, cameras, and mechanical scanning can be used to greatly improve the quality of underwater images over the ranges possible with a simple system.

As one characterization of optical system performance, the number of inverse attenuation coefficients, or “1/c” attenuation lengths, at which the system can yield adequate performance is specified. However, since the ratio of a to b can vary, this characterization suffers from several ambiguities. Nevertheless, typical environments can be characterized using the single scattering albedo, b/c, together with assumptions about the shape of the volume scatter function. Predictions of the performance of an optical system are then based on assumptions about the geometric configuration of the equipment and the physical properties that relate to the power and sensitivity of the light source and imaging system [4]. As one categorization, good images at ranges greater than or equal to three attenuation lengths are considered extended range images, as these ranges are not easily obtained via simple methodology.

Another categorization of underwater imaging systems relates to illumination: an area, a line, or a point may be illuminated. Since, ultimately, an image of an area is desired, the line and point are scanned in one or two directions, respectively, to produce an image. Scanning systems with narrow light sheets or small, collimated single beams have an inherent advantage in that their illumination of a small part of the subject can be used with collimated receive optics to increase contrast over systems that illuminate wider areas [5]. Figure 1 contains images from each type of system (area, line, point) at 3 attenuation lengths in two-meter (1/c) water. The figure illustrates that the clearest images are obtained with the point scanning system, the next best with the line scanning system, and the worst with area illumination.

Fig. 1 Resultant images for different illumination strategies. (a) An area. (b) A line scanned in one-dimension perpendicular to the line. (c) A point scanned in 2-dimensions.

These results support the claim that scanning in one or two dimensions is a necessary sacrifice in obtaining the best possible images. While such scanning requires more exotic technology, it also takes longer and leads to more complicated receive electronics. A method that reduces both the complexity and the scanning time is therefore desirable.

In this article we consider the use of structured illumination to fulfill this role. The basic idea is illustrated in Fig. 2. Here, a two-dimensional version of the concept is shown in which a grid pattern is projected onto the target. The grid pattern is then scanned over the distance between grid points, in parallel, in two dimensions, with sequential image acquisition, so that the entire area is illuminated over multiple light pulses. In the one-dimensional version, a set of narrow beams is projected within a line and then scanned between grid points; successive line patterns are then projected in order to scan an entire two-dimensional area.
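A minimal sketch of this scanning logic follows: it generates the sequence of shifted grid masks and verifies that a full scan illuminates every pixel exactly once. The function and variable names are hypothetical, and the 290 x 290 extent and 25-pixel spacing simply echo values used later in the paper.

```python
import numpy as np

def grid_mask(shape, dsx, dsy, dx, dy):
    """One snapshot of the structured pattern: a binary mask of beam positions
    with spacing (dsx, dsy), shifted by (dx, dy)."""
    mask = np.zeros(shape, dtype=bool)
    mask[dy::dsy, dx::dsx] = True
    return mask

shape, dsx, dsy = (290, 290), 25, 25
coverage = np.zeros(shape, dtype=int)
for dy in range(dsy):                  # scan the pattern over the inter-beam gap
    for dx in range(dsx):
        coverage += grid_mask(shape, dsx, dsy, dx, dy)

print(coverage.min(), coverage.max())  # 1 1: every pixel illuminated exactly once
```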

Fig. 2 A block diagram of the proposed hardware configuration.

One potential hardware implementation is also shown in Fig. 2. The system uses a pulsed laser in conjunction with a DLP projector array to create a structured light pattern. Note that the same DLP can be used in a mono-source, confocal-like configuration to both project and spatially filter light from the identical area. Images recorded with a range-gated camera from each pulse are simply superimposed.

In the limit that the beams are farther apart than the distance between the far ends of the array, a single beam is scanned in one or two dimensions. In the one-dimensional case, translation of the vehicle accommodates scanning the sea floor in the orthogonal direction and this geometry mimics that of a pulsed Laser Line Scan System. In the two-dimensional case, a single beam is laboriously scanned over an entire image, a time consuming process. Successful implementation of structured illumination could therefore offer extended range performance for two-dimensional imaging, something that is currently not available.

Structured lighting has previously been proposed for use in confocal optical imaging systems [6]. The theory of how a confocal optical system works is very similar to that of a Laser Line Scan system, except that, in the confocal case, the subject is scanned in three dimensions instead of two; faster scan rates are therefore desirable in confocal imaging as well. As suggested in [6], a sinusoidal pattern can be projected at three locations, imaged, and processed to obtain confocal-like images. Other uses of structured lighting are commonplace, in that the displacement between a transmitted and received beam is used for triangulation. These methods are only marginally related to the method proposed here; however, the use of a light sheet was proposed by Jaffe [7] and does offer the dual benefit of higher contrast as well as the potential for range discrimination via triangulation-like procedures.

2. Underwater imaging methodology

2.a Preliminary considerations

To explore the ramifications of using structured illumination in underwater imaging, a system of computer programs was employed, and the task was split into two parts. As is widely acknowledged, in the absence of absorption, the point spread function (PSF) of the medium is the fundamental environmental characteristic. The first phase therefore consisted of computing the overall PSF by observing the irradiance on a plane as a function of distance from a narrow, 1 mrad source. Note that absorption was ignored for the purpose of these computations, as the efficiency of underwater imaging systems can vary greatly and is highly dependent on the light power and collection apparatus. This work therefore centers on enhancing resolution by decreasing the forward scattering components. In addition, a gated system is envisioned that would completely eliminate backscatter. A real implementation of this idea would need to take all of these factors into account.

The derived PSF was then used in a second set of computer programs that performed the necessary convolutions, multiplications, and additions to compute a two-dimensional matrix containing the predicted image. Section 2.b considers the program used to simulate the PSF, Section 2.c describes the programs that used this information to create the concomitant images, and Section 2.d describes the image normalization and analysis procedures used to characterize the output images.

2.b Computing the Point Spread Functions

Computer modeling of underwater image formation has been in use and under development for many years. One set of such programs was developed by B. McGlamery and colleagues [9]; these programs were then adapted and extended [7] to run on contemporary workstations. Additional insights into the process of imaging in scattering environments, including the case of underwater imaging, were developed in [8,10].

The mathematical model and computer implementation used here were authored by E. Zege and colleagues. The program uses a multi-component method to solve the radiative transfer equation [10] and can accommodate a variety of oceanic volume scattering functions [11,12]. It is quite efficient, as it uses an optical reciprocity theorem [10]; the only assumption is that the phase function of the medium has a sharp forward peak, which is inherent to ocean water. Multiple scattering into small angles and only single scattering into large angles are accounted for in computing the radiance, as explained in [13]. A recent extension of these methodologies to plane-parallel environments has been accomplished [14].

Simulations focused on the prospective image improvements in relatively turbid “bay” water with a two-meter (1/c) attenuation length. As such, only a single set of PSFs was computed as a function of range. The environmental input to the model is summarized in Table 1. The camera's field-of-view was 60°, yielding a resolution of 4.1 mrad/pixel over 256 x 256 pixels. Figure 3 shows a slice through each of four cylindrically symmetric PSFs. The four quadrants correspond to distances of (2, 6, 10, 14) meters, or approximately (1, 3, 5, 7) total attenuation lengths in range. The blue curve is the light incident on the target after scatter by the medium, as imaged by the camera. The red curve is the forward scattered component after reflection by the target. The PSF for the path from source to target plane is the blue curve, while the PSF for the return path is the sum of the blue and red curves. Figure 3d especially illustrates that the beam broadening due to the two-way path results in a much broader function than that of the one-way path.

Table 1. Parameters used in the simulation to determine the Point Spread Function

Fig. 3 Point Spread Functions for the simulations as imaged by the camera. The blue curve is the light incident on the target from a narrow, one mrad projected beam. The red curve is the scattered light after reflection. The sum of the two quantities is the PSF. (a) One attenuation length; (b) 3 attenuation lengths; (c) 5 attenuation lengths; (d) 7 attenuation lengths. Vertical units are watts/m² incident on the camera. Horizontal units are 4.1 mrad/pixel.

2.c Image synthesis methodology

The next phase of the image simulation employed the set of PSFs to compute a variety of images that would be collected with either the one- or two-dimensional structured lighting geometry. The calculation involved several stages, as sketched below: 1) calculation of the light incident on the reflectance map, where the incident light pattern is a grid along a line or in a two-dimensional array; 2) calculation of the light incident on the camera after reflection from the map; 3) multiplication of the reflected light by a “mask” identical to the projected light pattern, i.e., a grid array of delta functions that are either ones or zeros; and 4) combination of the images in the scan via a simple linear superposition to synthesize a final image.
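The following sketch renders those four stages in code. It is a schematic of the calculation described above, not the author's actual programs: the medium is reduced to a single shift-invariant PSF array (psf), used for both the outgoing and return paths per the reciprocity argument given below, and all names are illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

def synthesize_image(reflectance, psf, dsx, dsy):
    """Full two-dimensional scan: project shifted delta grids, propagate through
    the medium, apply the confocal receive mask, and superpose the snapshots."""
    final = np.zeros_like(reflectance)
    for dy in range(dsy):
        for dx in range(dsx):
            mask = np.zeros_like(reflectance)
            mask[dy::dsy, dx::dsx] = 1.0                          # projected grid
            incident = fftconvolve(mask, psf, mode="same")        # stage 1
            reflected = incident * reflectance                    #   reflection
            at_camera = fftconvolve(reflected, psf, mode="same")  # stage 2
            final += at_camera * mask                             # stages 3 and 4
    return final
```

In the limit that (dsx, dsy) exceed the image size, the loops collapse to a per-pixel scan, reproducing the single-beam case of Fig. 1c.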

A convenient formulation of the imaging chain is considered in [5]. As given by Eq. (4), an image can be computed as

$$ I(x,y) = \int\!\cdots\!\int w_R(x_2,y_2)\, \mathrm{psf}(x_1,y_1,x_2,y_2;d)\, \mathrm{psf}(x_0,y_0,x_1,y_1;d)\, w_T(x_0,y_0)\, T(x - x_1,\, y - y_1)\, dx_0\, dy_0\, dx_1\, dy_1\, dx_2\, dy_2. \tag{4} $$
Here, an attenuating screen T(x′,y′) is scanned by (−x′,−y′) to produce the complete image I(x′,y′). Note that (x_0, y_0) are the coordinates of the source plane, (x_1, y_1) the coordinates of the target plane, and (x_2, y_2) the coordinates of the camera plane. The functions w_R and w_T are the receive and transmit beam weighting functions; they describe the receive or transmit pattern, referenced to the (x_1, y_1) plane, in the absence of medium effects. The two functions psf(x_1,y_1,x_2,y_2;d) and psf(x_0,y_0,x_1,y_1;d) are the point spread functions for transmission from plane (x_1,y_1) to (x_2,y_2) and from (x_0,y_0) to (x_1,y_1), referenced to the (x_1,y_1) plane.

In applying Eq. (4) to the case considered here, the two point spread functions are identical via reciprocity: the PSF the light incurs in going from the source to the target equals the PSF in going from the target to the receiver, since source and receiver are co-located. In addition, the transmit and receive weighting functions were made identical to mimic the confocal aspirations of the system.

In the calculations, three reflectance maps were used: (1) a constant value; (2) a simple delta function located in the center of the field-of-view; and (3) a test target, shown in Fig. 1, consisting of a black and white pattern with variable width spokes radiating from a central circle that contains a black and white checkerboard.

Now consider the mathematical representation of the weighting functions for both the one- and two-dimensional grid spacing patterns. The two-dimensional pattern, Eq. (5), is:

$$ w_T = w_R = \Pi_2(\Delta x, \Delta y, \Delta s_x, \Delta s_y) = \sum_{m=0}^{m_{\max}-1} \sum_{n=0}^{n_{\max}-1} \delta(x' - \Delta x - n\Delta s_x,\; y' - \Delta y - m\Delta s_y). \tag{5} $$
Here Δx and Δy are integers that range from (0, 0) to ((n−1)Δs_x, (m−1)Δs_y); they represent the displacement for each of the snapshots that compose an entire scan. The values (Δs_x, Δs_y) are the distances between the incident spots, and m and n are integers. Figure 4 illustrates the geometry for the two-dimensional case. For the one-dimensional pattern, Eq. (6) describes the weighting functions as:
$$ w_T = w_R = \Pi_1(\Delta x, \Delta s_x) = \sum_{n=0}^{n_{\max}-1} \delta(x' - \Delta x - n\Delta s_x) \tag{6} $$
with 0 ≤ Δx ≤ (n−1)Δs_x. Making use of the sifting property of the delta functions, the sampled image can be computed at the points I(Δx + nΔs_x, Δy + mΔs_y) via substitution into the integral in Eq. (4), where m = 0, …, m_max − 1 and n = 0, …, n_max − 1 for the two-dimensional image, and at I(Δx + nΔs_x), with n = 0, …, n_max − 1, for each scan of the one-dimensionally scanned image.
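To make explicit where the inter-beam leakage discussed in Section 2.d originates, it is helpful to write out the masked measurement at a single grid point. The following is a sketch under two simplifying assumptions already stated above: a single shift-invariant point spread function h serves for both paths, and source and receiver are co-located. Substituting the delta grids of Eq. (5) into Eq. (4) and sifting over the source and camera planes gives, for the grid point $\mathbf{p}_{mn} = (\Delta x + n\Delta s_x,\, \Delta y + m\Delta s_y)$,

$$ I(\mathbf{p}_{mn}) = \sum_{m', n'} \iint h(\mathbf{p}_{mn} - \mathbf{x}_1)\, h(\mathbf{x}_1 - \mathbf{p}_{m'n'})\, T(\mathbf{x} - \mathbf{x}_1)\, d\mathbf{x}_1 . $$

The (m′, n′) = (m, n) term is the desired confocal product; the remaining terms represent light projected at neighboring grid points that scatters into the receive path, and they shrink as the spacing (Δs_x, Δs_y) grows.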

Fig. 4 The sampling geometry embodied in Eq. (5). The figure illustrates a two-dimensional field of delta functions with spacing Δs_x in the x direction and Δs_y in the y direction. Note that here Δx = 1 and Δy = 1, as the delta function grid is shifted by one unit in each direction.

The final image for the two-dimensional case is described by Eq. (7):

$$ I_2(x,y) = \sum_{\Delta y = 0}^{(m-1)\Delta s_y} \sum_{\Delta x = 0}^{(n-1)\Delta s_x} I(\Delta x + n\Delta s_x,\; \Delta y + m\Delta s_y). \tag{7} $$

For the one-dimensional version, each line for a given y value can be represented by Eq. (8) as

$$ I_1(x,y) = \sum_{\Delta x = 0}^{(n-1)\Delta s_x} I(\Delta x + n\Delta s_x \,|\, y). \tag{8} $$
Note that in the one-dimensional case each line is collected via sheet illumination, so the final image can be represented by Eq. (9) as

$$ I_1(x,y) = \sum_{y_{\min}}^{y_{\max}} \sum_{\Delta x = 0}^{(n-1)\Delta s_x} I(\Delta x + n\Delta s_x \,|\, y). \tag{9} $$
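A corresponding sketch of the one-dimensional mode follows, again schematic and with hypothetical names: the in-line beam grid is scanned over the gap within each illuminated line, and successive lines are accumulated as in Eq. (9).

```python
import numpy as np
from scipy.signal import fftconvolve

def synthesize_image_1d(reflectance, psf, dsx):
    """One-dimensional mode of Eqs. (8)-(9): scan the in-line beam grid over
    each row, then stack the rows to form the final image."""
    ny, _ = reflectance.shape
    final = np.zeros_like(reflectance)
    for y in range(ny):                      # successive line patterns in y
        for dx in range(dsx):                # shifts of the in-line beam grid
            mask = np.zeros_like(reflectance)
            mask[y, dx::dsx] = 1.0
            incident = fftconvolve(mask, psf, mode="same")
            at_camera = fftconvolve(incident * reflectance, psf, mode="same")
            final += at_camera * mask        # confocal mask, then superpose
    return final
```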

The images shown in Fig. 1 are the result of using the spoke pattern as a target and applying Eq. (4) with the above one- and two-dimensional weighting patterns, where (Δs_x, Δs_y) = (1,1), (1,291), and (291,291) for (a), (b), and (c), respectively. When the spacing is greater than the image size, 290, the image is scanned on a point-by-point basis in that dimension.

Note that the methodology employed here does not include the off-nadir changes in image appearance due to increased path length; these include changes in the PSF as well as in attenuation.

2.d Image processing and analysis

The above algorithms were applied in the bay water as a function of range and attenuation length using the set of three targets listed above. Figure 5 displays the results for the one- and two-dimensional cases where Δs_x = 25 and (Δs_x, Δs_y) = (25, 25). One undesirable feature of these images is that they are subject to a non-uniform gradient of brightness. This was a ubiquitous feature of all of the images and only disappeared completely when the spacing between the light beams was so large that the images were scanned on a single-pixel basis. Upon investigation, it was found that the effect was mostly systematic: the beams closer to the center of the images received more light, via the convolution, from their neighbors.

Fig. 5 Several before and after images that show the results of the normalization. (a) The resulting image from Δs_x = 25; (b) after normalization; (c) the resulting image from (Δs_x, Δs_y) = (25,25); (d) after normalization.

A straightforward, and successful, approach to ameliorate this problem was to compute the image of a constant reflectance map. This image was then used to normalize the maps obtained from the other targets via a simple division; hence, for all analyzed images, I_out(x,y) = I_in(x,y)/I_calib(x,y), where I_in(x,y) is the spoke image and I_calib(x,y) is the image of a constant target under the same geometric and environmental conditions. Figures 5b and 5d show the output that resulted when the images in 5a and 5c were normalized by images of the constant target. As additional processing for comparison, in some cases the output was scaled so that the values all fell between (0,1).
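A minimal sketch of this normalization is given below. The guard against division by zero and the optional rescaling to (0,1) are implementation choices assumed here, beyond what the text specifies.

```python
import numpy as np

def normalize(image, calib, eps=1e-12, rescale=True):
    """I_out(x,y) = I_in(x,y) / I_calib(x,y), where calib is the image of a
    constant-reflectance target under the same geometry and environment."""
    out = image / np.maximum(calib, eps)   # eps guards nearly unlit pixels
    if rescale:                            # optional scaling into (0,1)
        out = (out - out.min()) / (out.max() - out.min())
    return out
```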

To characterize image contrast, a metric was computed over a central region of a horizontal line in each of the spoke images: I_cont = (I_white − I_black)/((I_white + I_black)/2). The metric is one way of computing fractional contrast and is very useful for characterizing the dynamic range necessary for the recording system.
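In code, the metric might be computed as below; the masks selecting the white and black spoke regions along the analyzed row are assumptions, taken as known from the target geometry.

```python
import numpy as np

def fractional_contrast(row_values, white_mask, black_mask):
    """I_cont = (I_white - I_black) / ((I_white + I_black) / 2) for one image row."""
    i_white = row_values[white_mask].mean()
    i_black = row_values[black_mask].mean()
    return (i_white - i_black) / ((i_white + i_black) / 2.0)
```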

Images were also analyzed to characterize resolution. One complication is that, when the imaging is non-stationary, the obtained images are not the convolution of an isoplanatic PSF with the original image. This is true here because the central regions of the image are subject to more leakage from the side lobes of the PSF than the non-central regions; the non-uniformity in Figs. 5a and 5c is primarily due to this effect. However, since the overall system is linear, an image can be considered to be composed of a set of spatially variant impulse responses, one at each location. The resolution of the system was therefore characterized by computing the “system impulse response” to a numerical delta function located at the center of the field of view; a final image would be a superposition of such kernels, each computed at its respective location.
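The probe can be expressed compactly by reusing the (hypothetical) synthesis routine sketched earlier; the only change is that the reflectance map becomes a numerical delta function at the center of the field of view.

```python
import numpy as np

def system_impulse_response(synthesize, shape, psf, dsx, dsy):
    """Response of the full scan-and-superpose system to a central point target."""
    delta = np.zeros(shape)
    delta[shape[0] // 2, shape[1] // 2] = 1.0
    return synthesize(delta, psf, dsx, dsy)   # e.g., synthesize_image from above
```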

Lastly, the images were visually inspected. Although subjective interpretation cannot always be definitive, the human visual system is quite adept at picking up small changes and in evaluating resolution.

3. Results

3.a Preliminary considerations

In order to systematically explore the potential benefits and pitfalls of the structured illumination idea, simulations and subsequent data analysis were undertaken in these bay conditions. In the one-dimensional case, simulated images were created at Δs_x = (1, 4, 25, 100, 291) pixels. As the final target dimensions were 290 x 290, the last number resulted in the implementation of a single-beam scan. In the two-dimensional scans, the values (Δs_x, Δs_y) = ((1,1), (2,2), (5,5), (10,10), (25,25), (100,100), (291,291)) were used, with the last value resulting in the simulation of a single, scanned beam.

3.b Contrast

Using the methodology described, a single value of contrast was computed at the center of a line located 20 lines below the top, or 20 columns in from the side, of the images, so that a quantitative measure of image quality could be assessed. Note that the inverse calibration procedure was applied to the output beforehand to compensate for the dark edges as well as to scale the results together. An example of the input data to this contrast computation is shown in Fig. 6. Here, slices through each image at row = 20 are superimposed for ranges of (1, 3, 5, 7) attenuation lengths, taken from the spoke image with Δs_x = 25. The figure illustrates the decrease in contrast between the black and white regions as range increases, with the highest contrast at the closest range and progressively less contrast at the more distant ranges.

Fig. 6 A graph of the values across a row for the radial spoke image at Δs_x = 25 as a function of attenuation length.

For all ranges and spacings between beams, contrast values were contoured on a single graph for the one-dimensional row and column cases as well as the two-dimensional case. Figures 7a and 7b contain contour diagrams for the one-dimensional case with row = 20 and column = 20, respectively. Figure 8 shows the contours for the two-dimensional case. The contour plots display the number of attenuation lengths along the ordinate and the spacing between beams on the abscissa. The data illustrate the general trend that contrast decreases with range and increases with increased spacing.

Fig. 7 Contour diagrams for the (a) row and (b) column contrast values.

Fig. 8 The contour diagram for the two-dimensional contrast values.

In analyzing the contour diagrams it is useful to think of a cutoff value below which it will be difficult to obtain useful images. While this value is somewhat conjectural, the contrast values are important for understanding the system's necessary dynamic range. If the contrast is low, the requirements for system dynamic range are high, while if the contrast is high, good images can be obtained even with low dynamic range systems. This is because, in general, the recording electronics must capture a mean level offset with the image contrast superimposed upon it. Inspection of Fig. 6 illustrates the idea: the target values oscillate around a mean value, with large contrast at the smaller ranges and low contrast at the larger ranges.

For a dynamic range system of 10 or 12 bits, a contrast value of 0.05 is reasonable, as somewhere between 4 and 5 bits (1/2^5 < 0.05 < 1/2^4) are used to record the mean offset. Given this offset, a 10 or 12-bit system would then have somewhere between 2^5 and 2^8 values available for grey level display. By similar thinking, an 8-bit system would need contrast greater than 1/2^3, or 0.125, so that 2^5 grey levels would be adequate for image display. Taken as a minimum requirement, 0.05 was assumed as a cutoff for a reasonably high dynamic range system, as many cameras are available today with 12-bit digitizers.
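The bookkeeping above can be made explicit with a few lines of arithmetic; the bit depths are those discussed in the text, and the 0.05 cutoff is the assumed minimum usable contrast.

```python
import math

contrast = 0.05                              # assumed minimum usable contrast
offset_bits = math.log2(1.0 / contrast)      # bits consumed by the mean offset (~4.3)
for total_bits in (8, 10, 12):
    grey_levels = (2 ** total_bits) * contrast   # levels left for the modulation
    print(f"{total_bits}-bit system: offset ~{offset_bits:.1f} bits, "
          f"~{grey_levels:.0f} grey levels for the contrast signal")
```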

Figures 7a and 7b, the contour diagrams for the one-dimensional system, illustrate that the contrast values are higher for the columns than for the rows. This is because the system scans the target in a row-by-row manner; in the column direction it is therefore closer to the single-scan system than to grid or sheet illumination. In the case of the columns (Fig. 7b), images at one to three attenuation lengths have adequate contrast (>0.05) to be seen without any spacing whatsoever. In the case of the rows, images cannot be seen at three attenuation lengths, as the contrast is inadequate. However, increasing to Δs_x = 4 improves the row contrast so that the images are almost discernable at three attenuation lengths. Comparison of the row and column cases indicates that increasing the spacing helps the row case more when the spacings are small; i.e., the curves are less steep in the low-spacing region and become steeper at larger spacings.

Things are quite different in the two-dimensional case shown in Fig. 8. In general, the system cannot achieve the contrast of the one-dimensional case as a function of spacing. However, note that the total number of scans must be multiplied by 290 in the one-dimensional case, because that system records data on a line-by-line basis. The two-dimensional case can barely achieve adequate contrast at two attenuation lengths without any spacing between the beams. However, at (Δs_x, Δs_y) = (2,2), the three-attenuation-length image becomes close to discernable, and increasing the spacing to (Δs_x, Δs_y) = (25,25) results in an excellent contrast value of 0.2.

In comparing the two cases, it appears that when the contrast is poor, as in the two-dimensional and one-dimensional row cases, the initial increase in spacing can make a substantial difference. This is followed by a region where further increases in spacing have less impact. Presumably, once the beams are farther apart than the main lobes of the impulse response, as clarified in the next section, increased spacing makes little difference.

3.c Resolution

A set of normalized impulse response curves for the one-dimensional system at Δs_x = (1, 4, 25, 100, 291) and five attenuation lengths is shown in Fig. 9. The figure illustrates that these impulse response functions can have quite complicated structure as a function of the spacing between beams.

Fig. 9 Impulse response to a numerical delta function located at the center of the image for the 1-dimensional case as a function of spacing.

In all cases, the impulse response functions have a narrow central lobe superimposed on a much wider, Gaussian-like function. Interestingly, the effect of increased spacing is to reduce the height and width of the wider lobe relative to the central, narrow one. Several ripple structures related to the spacing between the beams can be seen in the functions; at the largest spacing, where scanning is done by a single beam, there are no ripple-like structures. The reduction in the height and width of the Gaussian-like component results in less cross talk between points in the final image. In addition, a close inspection of the central region where the narrow beam is located reveals that the base of the narrow peak is identical in all of the functions. Resolution therefore increases with spacing, with higher resolution for the more widely spaced images.

4. Discussion and conclusions

In this article, the use of structured illumination for increasing the resolution of underwater optical images has been proposed. To explore the concept, a set of computer programs was employed to simulate both one- and two-dimensionally scanned images. The overall results support the idea that contrast can be increased using this method when compared with more traditional methods that illuminate either an area or a line. Significant increases in contrast can be obtained in the two-dimensional case, and also in the row direction, parallel to the incident illumination, in the one-dimensional case. Here, several issues, pragmatic and theoretical, related to the concept, implementation, and results are discussed.

As one important issue, the amount of light that is incident on the target in each beam needs to be considered. As proposed in Fig. 2, a single DLP is used to project the beams, so the laser light power is apportioned over the entire mirror array. If most of the mirrors are “off”, as would be the case when there is wide spacing between beams, the result is an extremely inefficient use of the power. For example, if the spacing between the beams is (25, 25), only 1/625 of the power is used. A practical implementation of the ideas considered here that uses the DLP architecture is therefore likely restricted to small spacings. As one way around this, coherent focusing methods, such as diffraction gratings, do not suffer the same effect; perhaps a future version of the system with large spacing could employ a scanned diffraction grating or another coherent focusing technique. Nevertheless, the work here motivates both the advantages of such a system and the development of new hardware to pursue this imaging mode.

One issue that certainly merits additional work is the pursuit of better spacing patterns; here, only one- and two-dimensional equally spaced grids have been considered. As noted, the leakage from neighboring beams is greater in the center of the image than at the edges. Using non-uniform patterns to ameliorate this effect is an interesting option: spacing the spots farther apart in the interior of the imaging volume while making them denser at the periphery is one way to minimize it. In addition, other types of regular patterns might result in better images; as one example, hexagonally spaced spots tile the plane efficiently [15].

One advantage of having programmable patterns is that the system can be made adaptive. As considered above, computation of the calibration images requires knowledge of the PSF. As envisioned, the impulse response of the media can be measured via the projection and subsequent analysis of a single beam. This information could then be used as input to an adaptive algorithm that would modify the incident beam pattern. In relatively clear water, line or area illumination could be used. As water turbidity increases and the impulse response function becomes broader, additional spacing between beams could be used to obtain higher contrast and resolution. The system would therefore sacrifice scan speed in order to obtain better images in more turbid water.

As explored here, the use of structured illumination in a challenging underwater environment presents a host of options for achieving better images in both one- and two-dimensional modes. Various issues related to the efficient use of light and the complexity of the hardware remain to be explored in order to accomplish a pragmatic implementation of the technique. Based on the results considered here, the relative advantages of increased capability weigh in favor of further pursuit. Perhaps future generations of underwater imaging systems will make use of these ideas as continuing strides in optical hardware avail new opportunities to those working in this challenging environment.

Acknowledgements

The author would like to thank the Office of Naval Research division of Environmental Ocean Optics for supporting this research.

References and links

1. S. Q. Duntley, “Light in the Sea,” J. Opt. Soc. Am. 53(2), 214 (1963).

2. C. D. Mobley, Light and Water: Radiative Transfer in Natural Waters (Academic Press, 1994).

3. K. S. Shifrin, Physical Optics of Ocean Water (American Institute of Physics, 1988).

4. J. S. Jaffe, K. D. Moore, J. McLean, and M. P. Strand, “Underwater Optical Imaging: Status and Prospects,” Oceanography (Wash. D.C.) 14, 64–75 (2001).

5. J. S. Jaffe, “Performance bounds on synchronous laser line scan systems,” Opt. Express 13(3), 738–748 (2005).

6. M. A. Neil, R. Juskaitis, and T. Wilson, “Method of obtaining optical sectioning by using structured light in a conventional microscope,” Opt. Lett. 22(24), 1905–1907 (1997).

7. J. S. Jaffe, “Computer modeling and the design of optimal underwater imaging systems,” IEEE J. Ocean. Eng. 15(2), 101–111 (1990).

8. J. S. Jaffe, “Monte Carlo modeling of underwater-image formation: validity of the linear and small-angle approximations,” Appl. Opt. 34(24), 5413–5421 (1995).

9. B. J. McGlamery, “A computer model for underwater camera systems,” in Ocean Optics VI, S. Q. Duntley, ed., Proc. SPIE 208 (1979).

10. E. P. Zege, A. P. Ivanov, and I. L. Katsev, Image Transfer Through a Scattering Medium (Springer-Verlag, Heidelberg, 1991).

11. E. P. Zege, I. L. Katsev, and I. N. Polonsky, “Multicomponent approach to light propagation in clouds and mists,” Appl. Opt. 32(15), 2803–2812 (1993).

12. I. L. Katsev, E. P. Zege, A. S. Prikhach, and I. N. Polonsky, “Efficient technique to determine backscattered light power for various atmospheric and oceanic sounding and imaging systems,” J. Opt. Soc. Am. A 14(6), 1338–1346 (1997).

13. E. P. Zege, I. L. Katsev, and I. N. Polonsky, “Analytical solution to lidar return signals from clouds with regard to multiple scattering,” Appl. Phys. B 60(4), 345–353 (1995).

14. T. E. Giddings and J. J. Shirron, “Numerical simulation of the incoherent electro-optical imaging process in plane-stratified media,” Opt. Eng. 48(12), 1–13 (2009).

15. R. M. Mersereau, “The processing of hexagonally sampled two-dimensional signals,” Proc. IEEE 67(6), 930–949 (1979).
