Multi-aperture system approach for snapshot multispectral imaging applications

Open Access

Abstract

We present an ultra-compact system approach for snapshot multispectral imaging. It is based on a slanted linear variable spectral filter mounted in close proximity to the entrance pupil of a micro-optical, multi-aperture imaging system. A compact demonstration setup with a size of only 60 × 60 × 28 mm³ is developed, which enables the acquisition of 66 spectral channels in a single shot and offers a linear spectral sampling of approximately six nanometers over an extended wavelength range of 450–850 nm. The spatial sampling of each channel covers up to 400 × 400 pixels. First, the concept, the optical design and the fabrication are detailed. After the optical performance characterization, a comprehensive calibration strategy is developed and applied. An experimental demonstration is performed by acquiring the spatial and the spectral information of an imaged test scene.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Multi- and hyperspectral imaging, which refer to the recording of two-dimensionally resolved spectral information, are currently spreading rapidly across a broad variety of application fields, including the agri-food sector [1–3], microscopy [4,5], environmental monitoring [6,7], and military applications [8]. In addition to the possibility to gather image and spectral information simultaneously, multi- and hyperspectral imaging offer the advantage of being a non-invasive and contact-free analysis tool. A number of different spectral imaging techniques have been developed in the past few years to meet the individual requirements of each specific application. In spot- or line-scanning techniques, such as ‘whiskbroom’ or ‘pushbroom’ systems, the observed scene is scanned point by point or line by line, while the spectral information is recorded simultaneously. Especially for high-speed applications, these techniques suffer from motion blur and show a reduced signal-to-noise ratio (SNR). In filter-wheel based systems [9], a series of images is acquired sequentially, each using a different spectral filter. In general, this approach offers images with high lateral resolution. Recently, Xu et al. [10] introduced a hyperspectral stripe projector based on a digital micro-mirror device in combination with a consecutively scanning grayscale detector for simultaneous 3D spatial and hyperspectral imaging. However, both approaches are only appropriate for static or slowly changing scenes. So-called ‘snapshot’ imaging systems allow for recording the lateral and the spectral object information in a single acquisition and create a complete three-dimensional, hyperspectral cube in one step. To this end, the spectral imaging data is multiplexed onto a single frame of an image sensor. Thereby, scanning artifacts are avoided and the temporal resolution is increased compared to the techniques mentioned above, which broadens the range of possible application domains.

A broad variety of different technologies have been developed for ‘snapshot’ spectral imaging over the past years and decades. The challenge is to find a suitable balance between spectral and lateral resolution for each specific application, while taking the instrument’s complexity as well as the need for post-processing into account. For example, image mapping spectrometers (IMS) [11] and integral field spectrometers (IFS) [12] rely on the segmentation of the image of a scene by slicing mirrors, fiber bundles or lenslet arrays. They necessitate a proper pupil treatment and finally a reimaging onto a two-dimensional detector. Drawbacks of these techniques include the need for a high-precision image slicer and the complexity of the subsequent optical path to ensure a proper pupil treatment and spectral decomposition. A series of beamsplitting polarizers and waveplates is employed in the image-replicating imaging spectrometer (IRIS) [13] to create a set of spectrally separated images on a two-dimensional detector. Due to the complex and elaborate setup, the IRIS concept is practically limited to about 16 channels [14]. Computer generated holograms (CGHs) are used in computed tomography imaging spectrometers (CTIS) [15] to generate 3 × 3 or 5 × 5 sheared spectral images on the detector. In addition to the sophisticated optical setup, an extensive computational post-processing of the captured image is necessary to reconstruct the three-dimensional data cube. A detailed comparison of ‘snapshot’ spectral imaging approaches, covering further concepts, can be found in [16]. In summary, the complex optical path and the bulky system size constitute major drawbacks of current systems. A further drawback of some systems is the sophisticated post-processing required to reconstruct the spectral information from the captured data, which prevents real-time online observation.

In this contribution, we present an alternative concept that addresses applications of multispectral imaging systems where high compactness and snapshot acquisition are of particular importance. These applications may include precision agricultural monitoring or environmental surveillance based on low-altitude vehicles such as drones. To this end, we show the development of a multispectral snapshot camera based on a micro-optical, multi-aperture approach combined with a slanted linear variable spectral filter [17]. In the first part, the optical design and working principle of the system are detailed. This is followed by the description of the system setup with a focus on the fabrication of the essential optical elements, the system’s assembly, the alignment and the system’s characterization and calibration. Finally, we present the extraction of the spectral information of an extended scene in order to experimentally demonstrate the capabilities of the developed system.

2. System concept

The proposed multispectral imaging concept combines a multi-aperture, micro-optical imaging system with a linear variable spectral filter (LVF). Each single micro-optical imaging unit, further referred to as a channel, images the entire object distribution of interest onto the image sensor, as seen in Fig. 1. Yet, each channel detects only a certain spectral bandwidth of the object, which is depicted in Fig. 1 using three different color images. Note that the object distance is considerably larger than the focal length of the imaging system. A schematic cross-section of the imaging system is shown in Fig. 2(a), representing an example of three channels of the multi-aperture imaging system. The multi-aperture imaging system comprises a stack of several micro-structured optical layers integrated close to an image sensor. Specifically, an LVF is positioned at the top of the stack, followed by a glass substrate equipped with an aperture array on its top and a matching micro-lens array (MLA) at its bottom surface. A customized baffle array and finally a cover glass are located at the lower part of the stack. For large object distances, as is typical in remote sensing applications, each micro-lens of a specific channel images the same field of view (FOV) of the scene onto an allocated area of the detector. Due to the linear variable filter, only a narrow spectral bandwidth passes through each channel. As a result, a series of images of the same scene is captured, with the individual images differing from each other in their spectral content. The transmittance wavelength of the LVF (“Bifrost – Continuously Variable Bandpass Filter” (LF103245); Delta Optical Thin Film A/S) [18] varies from 450 nm to 880 nm within 36 mm along one direction, and the spectral bandwidth is specified as two percent of the center wavelength. The wavelength characteristic of the filter remains constant in the perpendicular direction. The wavelength variation of the filter covers a feasible range for VIS and NIR spectral imaging. The geometric dimensions of the filter are well suited to cover a full-frame image sensor with an active pixel area of 24 × 36 mm². Accordingly, we have chosen the KAI-16000 CCD image sensor from ON Semiconductor with approximately 16 MP (4872 (H) × 3248 (V)) total pixel count and a square pixel pitch of 7.4 µm.

Fig. 1. Working principle of the proposed multispectral imaging system capturing an object in remote sensing. Each channel captures only a certain spectral part of the object.

Fig. 2. (a) Cross-section of the optical system design principle including the linear variable filter (LVF), the micro-lens array (MLA), the baffle array and the image sensor with cover glass. Each lens of the MLA images the entire field-of-view (FOV) within a specific spectral range onto the image sensor. The baffle array prevents crosstalk between neighboring channels. (b) MTF diagrams for three different wavelengths (450 nm, 665 nm, 880 nm) and two different field positions – solid line for 0° and dashed line for 21°.

Similar multispectral imaging systems that rely on a large image sensor format have been proposed by Renhorn et al. [19], Mu et al. [20] and Meng et al. [21]. However, our imaging system approach does not require a main objective lens and, therefore, enables a higher compactness and a lower weight at the same time. In contrast to the system developed in Ref. [19], which utilizes a scanning approach, our system allows for the retrieval of multispectral object information based on a single acquisition.

2.1 Optical design

A single optical channel is initially considered in order to optimize the optical parameters of the imaging system. The design parameters of the one-sided micro-lens include the aperture diameter, the lens radius of curvature, the glass substrate thickness and the spacing distance of the micro-lens array. The selection of these parameters is based on finding a compromise for the system’s f-number (f/#) over a reasonable field of view. If the f-number is large, the optical aberrations are negligible and do not compromise the imaging performance. If the f-number is small, the system provides a higher light sensitivity, but is subject to larger aberrations that reduce the imaging performance. The contrast of the imaging system, characterized by the modulation transfer function (MTF) at a distinct spatial frequency, is used to quantify and ultimately optimize its imaging performance. In particular, we optimize the MTF at a spatial frequency ${\nu _o} = 0.5\; \cdot {\nu _{Ny}}$, where ${\nu _{Ny}}$ is the Nyquist frequency calculated by ${\nu _{Ny}} = 1/({2{p_{px}}} )$ using the pixel pitch ${p_{px}}$ of the image sensor [22]. Minimizing the f-number for optimum light sensitivity while, at the same time, maintaining MTF > 0.5 at ${\nu _o}$ provides the main figure of merit of the optimization process over the extended transmittance wavelength range from 450 nm to 880 nm of the LVF. The radius of curvature and sag height of the lenslets of the micro-lens array are restricted by boundary conditions given by the wafer-level manufacturing process.
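As a minimal numerical sketch, the design frequencies follow directly from the pixel pitch quoted above; the values below reproduce the approximately 67 LP/mm Nyquist frequency stated in the text.

```python
# Minimal sketch: design spatial frequencies from the 7.4 um pixel pitch.
p_px = 7.4e-3             # pixel pitch in mm
nu_ny = 1.0 / (2 * p_px)  # Nyquist frequency, ~67.6 LP/mm
nu_o = 0.5 * nu_ny        # MTF optimization frequency, ~33.8 LP/mm
print(f"nu_Ny = {nu_ny:.1f} LP/mm, nu_o = {nu_o:.1f} LP/mm")
```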

The behavior of the simulated MTF is shown in Fig. 2(b) with respect to the spatial frequencies up to the Nyquist frequency of approximately 67 LP/mm for three spectral channels and two field positions, respectively. The solid lines refer to an on-axis field point and the dashed lines to an outer field point of 21°. Due to the chromatic focal shift of the singlet lens, the MTF behavior differs for different wavelengths, as the back focal distance is optimized for the 665 nm channel. Furthermore, the MTF is decreased for the maximum, off-axis field due to the field curvature of the optical system featuring a single positive lens. However, the contrast is larger than the anticipated 0.5 at the Nyquist-half frequency over nearly the entire FOV and the considered spectral range.

2.2 Orientation of the LVF

The LVF consists of a UV-graded fused silica substrate with a bandpass filter on one side and a dielectric interference filter wedge on the other side. The bandpass filter attenuates the light between 200 nm and 1150 nm with an optical density $OD \ge 4$. In the wavelength range between 450 nm and 880 nm, the wavelength-selective transmittance varies linearly along one direction of the filter due to the filter wedge. At each filter position, light is transmitted for normal incidence with a bandwidth (FWHM) of two percent around the specific center wavelength. The position of the LVF in close proximity to the aperture plane of the micro-lens array and the finite size of each aperture in the aperture array restrict the detected wavelength range for each optical channel. An alternative position of the LVF directly in front of the image sensor was previously used in Ref. [19] for scanning hyperspectral cameras and recently in Ref. [20]. The full opening angle of the conical ray bundle is approximately 7° for the large f/8 described in Ref. [20]; in contrast, it is approximately 20.5° for the small f/2.8 stated in Ref. [19]. This configuration leads to a larger spectral bandwidth received by each pixel due to the extended angle distribution. Additionally, the system of Mu et al. [20] requires a quadratic field stop in the main objective lens in order to confine the sub-images when placing the LVF close to the image sensor. The confinement in our imaging system is realized using our customized 3D baffle structure, while no main objective lens is required.

The orientation of the LVF with respect to the multi-aperture imaging system needs to be optimized in order to obtain as many wavelength-distinct sub-images as possible. In case the LVF’s outer edges are aligned in parallel to the MLA grid axes, each channel of a specific MLA column would be oriented perpendicular with respect to the varying filter direction and would detect the same spectral information (see Fig. 3(a)). In order to increase the spectral sampling across the wavelength range, the LVF is slightly tilted with respect to the micro-lens array, as shown in Fig. 3(b). A uniform and linear spectral sampling can be achieved by adjusting a pre-defined tilt angle $\alpha $. This angle $\alpha $ depends on the number of channels n in y-direction and the channel pitches ${p_x}$ and ${p_y}$ of the micro-lenses in x and y, respectively, according to

$$\alpha = \arctan \left( {\frac{{{p_x}}}{{n\cdot{p_y}}}} \right).$$
The sub-images yield the highest fill-factor on the image sensor when ${p_x}$ and ${p_y}$ are equal, assuming the image circle is confined by a square. Thus, Eq. (1) can be simplified to:
$$\alpha = \arctan \left( {\frac{1}{n}} \right).$$

Fig. 3. (a) MLA with micro-lens pitches px and py oriented parallel to the outer edges of the LVF. Each channel in the same column of the MLA detects the same spectral information. The single channels are indicated by their position in x-direction from -5 to 5 and in y-direction from -2 to 3. The central channel is denoted as (0;0). (b) The LVF is tilted by approximately 9.5° w.r.t. the MLA in order to achieve a linear spectral sampling between the neighboring channels.

In order to achieve an appropriate compromise between spectral sampling and high spatial resolution over the extended wavelength range and the size constraints of the LVF, we utilize an array of 11 × 6 micro-lens channels. According to Eq. (2), the tilt angle $\alpha \; $ between LVF and MLA is approximately 9.5° for $n = 6$ channels in one column.
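A short sketch of Eq. (2), reproducing the quoted tilt angle for n = 6 channels per column:

```python
import math

# Sketch of Eqs. (1)/(2): tilt angle between LVF and MLA for a linear
# spectral sampling; p_x = p_y is assumed, as in the final design.
def lvf_tilt_angle_deg(n, p_x=1.0, p_y=1.0):
    return math.degrees(math.atan(p_x / (n * p_y)))

print(lvf_tilt_angle_deg(6))  # ~9.46 deg, i.e. the ~9.5 deg quoted above
```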

Finally, the customized baffle structure shown in Fig. 2(a) corresponds to a three-dimensional aperture array with circular cross-sections towards the MLA and square cross-sections towards the image sensor. It serves as a field stop that limits the FOV of each spectral channel on the image sensor, and its shape is optimized to provide a maximum fill-factor. Furthermore, the baffle structure suppresses optical crosstalk between neighboring channels, which would otherwise result in undesired straylight paths and reduced image contrast. The limited physical size and the exclusion of the boundary areas of the LVF reduce the effective pixel area of the image sensor to 10.8 MP. Due to the cover glass of the image sensor, the baffle structure height and the wall thickness are restricted, which limits the ability to achieve a relative illumination of zero between neighboring channels. This might be improved by using image sensors without a cover glass.

2.3 Final design parameters

The final design parameters of the optimized micro-lens setup and baffle array are shown in Table 1. The system has a total track length of only 7.2 mm. The imaging system offers an f-number of f/7, which results from the trade-off between minimizing aberrations over the full diagonal FOV of 68° and maximizing the system’s light sensitivity. The angular sampling in object space is about 0.12° according to the focal length of 3.65 mm of the imaging system. Each channel provides a spatial sampling of approximately 400 × 400 pixels. This spatial resolution is comparable to the one in Ref. [20], but less than in Ref. [21]. Furthermore, it is significantly less than in Ref. [19], which addresses approximately the full spatial resolution of 22 MP of its focal plane array. However, that system approach is not snapshot-capable.

Table 1. Final design parameters of the MLA and baffle array of the multispectral imaging system

2.4 Dependency of the transmitted filter wavelength on incidence position and angle

In the next step, we investigate the spectral bandwidth passing through each specific channel. The ray bundle emerging from each field position of the observed scene impinges onto the LVF at a different position and with a different field angle. The specific conditions of the upper part of a single optical channel are illustrated schematically in Fig. 4. The central ray bundle hits the LVF perpendicularly, and the corresponding chief ray is a straight line pointing into the center of the aperture. The aperture diameter is denoted as ${D_{AP}}$, and ${\lambda _c}(0^\circ )$ denotes the central wavelength passing the filter at this local position and for the perpendicular incidence angle. The ray bundles from the outermost field positions of the scene are described by the field angle ${\theta _r}$, which equals the angle of incidence (AOI) on the filter. In order to pass through the aperture, the respective chief ray of the ray bundle intersects the surfaces of the LVF at shifted positions compared to the perpendicular incidence. The shift distance on the lower surface, which contains the interference filter wedge, is denoted as ${d_s}$.

Fig. 4. (a) 2D Scheme of the angular dependent wavelength shift of the central wavelength transmitting through the LVF and the entrance pupil DAP of the MLA. (b) 3D Scheme of the oblique chief ray hitting the LVF, where the radial and axial incident angles on the surface of the LVF are marked.

Both the shifted intersection position and the changed incidence angle cause a change in the transmitted wavelength for the off-axis ray bundle. In particular, the LVF changes its transmittance wavelength with a length-dispersion ${\delta _{LVF}} = 13.1\;{{nm} / {mm}}$ in x-direction for perpendicular incidence. Note that the transmittance wavelength in y-direction remains constant. In addition, changing the incidence angle shifts the transmitted wavelength ${\lambda _c}(\theta )$ in an equivalent manner, which can be quantified by [23]:

$${\lambda _c}(\theta ) = {\lambda _c}(0^\circ )\cos \left( {\frac{\theta }{{{n_{eff}}}}} \right).$$
Here, ${n_{eff}}$ is the effective refractive index of the LVF, which is provided by the manufacturer and equals 1.7648. The transmittance wavelength ${\lambda _c}({\theta _r})$ can be calculated for any field position indicated by the incidence angle ${\theta _r}$ by combining both aspects: the shift of the incidence position and the change of the incidence angle. Accordingly, ${\lambda _c}({\theta _r})$ can be expressed as
$${\lambda _c}({\theta _r}) = ({\lambda _c}(0^\circ ) + {\delta _{LVF}} \cdot {d_s}({\theta _x})) \cdot \cos \left( {\frac{{{\theta_r}}}{{{n_{eff}}}}} \right),$$
where,
$${\theta _r} = \arctan \left( {\sqrt {\tan {{({\theta_x})}^2} + \tan {{({\theta_y})}^2}} } \right)$$
and
$${d_s}({\theta _x}) = \tan ({\theta _x}) \cdot {h_{LVF - MLA}}.$$
The angles of incidence on the LVF in radial, x- and y-direction are denoted as ${\theta _r}$, ${\theta _x}$ and ${\theta _y}$, respectively. Here, ${h_{LVF - MLA}} = 0.3\;mm$ denotes the distance between the lower surface of the LVF and the aperture of the MLA, and ${h_{LVF}} = 1\;mm$ is the thickness of the LVF. Additionally, the variation of the transmitted wavelength due to the finite aperture is calculated as $\Delta {\lambda _c} = {\delta _{LVF}} \cdot {D_{AP}} = 6.7\;nm$, which is independent of the center wavelength and angle of incidence. This value is smaller than the specified spectral bandwidth of the LVF (two percent of the center wavelength ${\lambda _c}({0^\circ } )$) and, therefore, negligible.
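A minimal sketch of Eqs. (4)–(6) using the parameters quoted above ($n_{eff} = 1.7648$, ${\delta _{LVF}} = 13.1\;nm/mm$, ${h_{LVF - MLA}} = 0.3\;mm$); the example field position is illustrative.

```python
import numpy as np

N_EFF = 1.7648    # effective refractive index of the LVF
DELTA_LVF = 13.1  # length dispersion of the LVF in nm/mm
H_LVF_MLA = 0.3   # distance between LVF and MLA aperture in mm

def transmitted_wavelength(lambda_0, theta_x_deg, theta_y_deg):
    """Central transmitted wavelength (nm) for oblique incidence, Eq. (4)."""
    tx, ty = np.radians(theta_x_deg), np.radians(theta_y_deg)
    theta_r = np.arctan(np.sqrt(np.tan(tx)**2 + np.tan(ty)**2))  # Eq. (5)
    d_s = np.tan(tx) * H_LVF_MLA                                 # Eq. (6)
    return (lambda_0 + DELTA_LVF * d_s) * np.cos(theta_r / N_EFF)

# Relative shift for the 665 nm channel at the field position [18 deg, 18 deg]:
print(transmitted_wavelength(665.0, 18.0, 18.0) / 665.0)  # ~0.97, i.e. ~3% shift
```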

The wavelength shift ${\lambda _c}$ is shown in Fig. 5(a) as a function of the incidence angle ${\theta _r}$ according to Eq. (4) and normalized by ${\lambda _c}(0^\circ ) = 665\;nm$. The two curves illustrate the dependency in the vertical direction along the y-axis (blue) and in the horizontal direction along the x-axis (red), respectively. Due to the constant wavelength transmittance of the LVF in y-direction, the blue curve shows a symmetric behavior with respect to positive and negative angles. In contrast, the red curve, indicating the horizontal orientation, is slightly shifted and exhibits an asymmetric behavior, which is a result of the linear dispersion contribution ${\delta _{LVF}}$ in Eq. (4).

Fig. 5. (a) Normalized wavelength shift ${{{\lambda _c}} / {{\lambda _c}(0^\circ )}}$ for ${\lambda _c}(0^\circ ) = 665\;nm$ dependent on the incidence angle ${\theta _r}$ in vertical direction (blue) and in horizontal direction (red). (b) 2D representation (including the tilt of the LVF) of the normalized wavelength shift ${{{\lambda _c}} / {{\lambda _c}(0^\circ )}}$ for ${\lambda _c}(0^\circ ) = 665\;nm$ dependent on all field positions in the x-y-plane. The red dot marks the value for ${\theta _r} = 0^\circ $ and the red dotted concentric circles represent incidence angles of ${\theta _r} = \{{5^\circ ,10^\circ ,15^\circ ,20^\circ } \}$. The blue dot marks the maximum value in this plot.

Additionally, Fig. 5(b) shows a color-coded diagram of the wavelength shift in a two-dimensional representation, taking into account all field positions of the x-y-plane. Again, the values are normalized to the central wavelength of ${\lambda _c}(0^\circ ) = 665\;nm$. The red dot depicts the center of the field of view, represented by an incidence angle of 0°. The red-dotted concentric circles indicate incidence angles of 5°, 10°, 15° and 20° from inside to outside, respectively. The tilt of the central image with respect to the outer axis is due to the slanted orientation of the LVF compared to the micro-lens array. The blue dot in Fig. 5(b) marks the position of the maximum wavelength corresponding to the central wavelength of 665 nm, which is slightly shifted away from the center according to Eq. (4). The distance of the actual maximum wavelength to the channel center depends on the effective refractive index and the central wavelength: the greater the refractive index or the central wavelength, the smaller the distance to the channel center. It can be seen in Fig. 5(b) that a relative wavelength shift of up to 4% occurs within the system’s field of view, which is larger than the spectral bandwidth of the LVF (two percent of the center wavelength ${\lambda _c}({0^\circ } )$). This emphasizes the importance of a proper spectral calibration of the system across the field of view.

In addition to the conventional spectral calibration, the proposed multi-aperture, multispectral imaging configuration necessitates a tailored spatial calibration. In particular, the lateral distance between single micro-lenses introduces a depth-dependent parallax effect if an object point is observed at a finite distance. If the optical axes of two neighboring channels are aligned parallel, the apparent shift of the corresponding image point, generally referred to as an image disparity $\Delta x$, can be estimated by simple triangulation according to:

$$\Delta x = \frac{{b \cdot f}}{z}.$$
Here, b denotes the pitch between the two channels, f is the focal length of the micro-lenses and z is the distance from the object point to the image sensor plane. Due to the small focal length of 3.65 mm and the high f-number of f/7, the sub-images of the channels provide an almost constant MTF performance for object distances larger than 130 mm. Accordingly, a refocusing of the optics is not needed. However, the disparity is depth-dependent and needs to be calibrated in order to match the spectral information for a certain object point. Consequently, a calibration procedure needs to be developed that determines the center positions of each channel in the imaging system dependent on the object distance. The corresponding routine is discussed in Sec. 3.3. This works well for planar scenes; however, a routine for 3D scenes should be addressed in future work.
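A small sketch of Eq. (7); the focal length and pixel pitch are taken from the text, while the channel pitch b is an assumed, illustrative value.

```python
# Sketch of Eq. (7): depth-dependent disparity between two parallel channels.
f = 3.65       # focal length in mm (from the text)
b = 3.0        # assumed pitch between adjacent channels in mm (illustrative)
p_px = 7.4e-3  # pixel pitch in mm

def disparity_mm(z_mm):
    """Apparent image-point shift between two channels at object distance z."""
    return b * f / z_mm

# Object distance beyond which the disparity drops below one pixel:
z_min = b * f / p_px
print(f"sub-pixel disparity for z > {z_min / 1000:.2f} m")  # ~1.5 m for these values
```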

3. System set-up

3.1 Element fabrication and assembly

The MLA is fabricated by state-of-the-art wafer-level-optics manufacturing technologies [24,25]. The utilized approach allows for implementing micro-optical systems that comprise micro-lenses arranged in an extended array with diameters and sag heights in the range of hundreds of micrometers. A BOROFLOAT 33 glass wafer with a diameter of 4” is processed using UV lithography and a subsequent reflow process in order to generate the spherical micro-lens array master. The final glass wafer is equipped with a ‘black chrome’ aperture array on the front side and a highly absorbing (black-matrix) polymer on the back side. The final lens elements are fabricated in the polymer OrmoComp by a UV molding process on the back side of the wafer substrate. Finally, a single array with 11 × 6 channels is diced using a chip saw.

The customized baffle array is fabricated by initially drilling small, cylindrical holes in an aluminum alloy plate. The aperture diameter and the pitch of the holes match the micro-lens apertures and pitches of the MLA. Subsequently, the designed square footprint on the backside of the baffle array is manufactured by an additional milling step. This step enlarges and reshapes the openings on the back side of the baffle plate while keeping the apertures on the front side circular. Finally, the baffle structure is black-anodized and directly bonded to the micro-lens array.

3.2 Alignment and system characterization

The KAI-16000 CCD image sensor with a housing including a cooling structure is shown in Fig. 6(a). The complete optical stack, consisting of the LVF, the MLA, and the baffle array, is fixed into a mechanical holder frame by using adjustment notches (see Fig. 6(b)). Due to the relatively large mechanical tolerances of the packaging, the frame has to be actively aligned in z-direction with respect to the CCD image sensor. The frame height above the image sensor is adjusted by using mechanical spacer elements with tailored thicknesses, while the image quality of a captured test target is analyzed using the modulation transfer function (MTF). The alignment criterion is a maximized MTF in the central channel (0;0). The aligned and completed system is shown in Fig. 6(c) with the inclined LVF and the FireWire connection.

Fig. 6. (a) Photograph of the KAI-16000 CCD image sensor included in the mechanical housing and (b) the multispectral micro-optical unit from the bottom showing the square-shaped holes of the baffle array. (c) Size comparison of the complete multispectral camera with a 2-Euro coin. The size of the multispectral camera is only 60 × 60 × 28 mm³ and it weighs only 200 g.

In order to assess the imaging quality of the multispectral camera, the modulation transfer function (MTF) of three different channels is analyzed by an approach described in Ref. [26], which is similar to ISO 12233 [27]. To this end, a screen providing a slanted-edge contrast feature is imaged at a distance of approximately 1 m. Each channel’s image of the slanted edge is evaluated in the central (“on-axis”) region of the field of view, respectively. The MTF performance of the micro-optical imaging system is shown in Fig. 7 for three different channels, namely for a blue (475 nm), a red (665 nm), and an infrared channel (835 nm), up to the Nyquist frequency of approximately 67 LP/mm. Additionally, the theoretical MTF values of the design are plotted with dashed lines, respectively.

Fig. 7. On-axis MTF plot for three channels representing different spectral channels (channel (-5;0) – 835 nm, channel (0;0) – 665 nm, channel (5;0) – 475 nm). The dashed lines correspond to the simulated MTF of the channels, respectively.

The measured MTF values of the three different channels are in agreement with the theoretical values from the simulation. The contrast for all channels is larger than 0.5 at the Nyquist-half frequency ${\nu _o}$ as intended in the design, which indicates the suitability of the manufacturing process and the properly aligned optical stack. The Nyquist-half frequency of approximately 33.5 LP/mm corresponds to an object space resolution of 0.12 LP/mm at 1 m object distance. Therefore, two object points separated by approx. 4 mm are resolvable with a contrast larger than 0.5. The MTF of the red and infrared channels show an almost equal behavior. The blue channel shows a slightly lower contrast due to the chromatic focal shift.
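A strongly simplified sketch of an edge-based MTF estimate is given below; the full slanted-edge method of Ref. [26] and ISO 12233 additionally exploits the edge slant for sub-pixel supersampling of the edge-spread function, which is omitted here.

```python
import numpy as np

def mtf_from_edge(edge_profile, pixel_pitch_mm):
    """Estimate the MTF from a 1D edge-spread function (simplified)."""
    lsf = np.diff(edge_profile)        # line-spread function
    lsf = lsf * np.hanning(lsf.size)   # window to suppress truncation ripple
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                      # normalize to unity at zero frequency
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)  # in LP/mm
    return freqs, mtf

# Synthetic test: a smoothed step edge sampled at the 7.4 um pixel pitch.
x = np.arange(-32, 32)
esf = 0.5 * (1.0 + np.tanh(x / 1.5))
freqs, mtf = mtf_from_edge(esf, 7.4e-3)
```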

3.3 System calibration

The presented snapshot multi-aperture imaging system allows the recording of an object scene at 66 different wavelengths simultaneously. Two distinct calibration steps are required in order to extract and analyze the spatially resolved spectral information from an extended object scene correctly and to derive an appropriate multispectral data cube from the measurements. In particular, a spatial and spectral calibration are used to compensate for the object distance dependent distortions of the MLA and the field dependent spectral transmission properties of each pixel, respectively. In addition, general calibration procedures have to be applied for measurements including a white reference correction and a dark noise subtraction. This will be discussed in Sec. 4 in more detail.

In order to determine the relative displacement of the spatial coordinate system of each sub-image, a point source is imaged at various distances by all 66 channels. For each sub-image, the center of gravity of the spot image is determined individually. These results are used as the origins of the local coordinate systems of the sub-images. In order to generate a multispectral data cube, all 66 sub-images are superposed according to their individual coordinate centers. The simulated and measured disparity values are shown in Fig. 8(a) as a function of the object distance for two representative cases. The green and blue curves correspond to the disparity between two adjacent channels and between the central and the outermost channel, respectively. The simulation is based on simple triangulation, assuming parallel-oriented optical axes of all channels and the overall optical axis of the micro-lens system passing through the center position of the 66-channel array.
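A minimal sketch of the centroid step, assuming each sub-image contains a single bright spot on a dark background:

```python
import numpy as np

def center_of_gravity(sub_image):
    """Intensity-weighted centroid (x, y) of a spot image in pixel units."""
    img = np.asarray(sub_image, dtype=float)
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total
```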

Fig. 8. (a) Simulated and measured disparity values as a function of the object distance for two representative cases: the disparity between channel (0;0) and the adjacent channel (0;-1) is shown in green color; the disparity between channel (0;0) and the outermost channel (-5;3) is shown in blue color. (b) shows the disparity of each channel relative to the central channel (0;0) at an object distance of 1750 mm.

The results of the disparity measurement show that the disparity between the coordinate systems of two neighboring channels is smaller than the detector’s pixel size (7.4 µm) for object distances larger than approximately 1.5 m. The image disparity between the central channel and the outermost channel is smaller than the pixel resolution of the detector if the object distance is larger than 8.5 m. The image in Fig. 8(b) shows a two-dimensional, color-coded representation of the disparity of all channel origins with respect to the origin position of the central channel for an object distance of 1.75 m. The disparity increases for channel positions further away from the center and is almost symmetric around the central channel, with an error of less than one pixel. However, the calibration results for shorter distances below 1 m can be used to correctly overlay all channels if the extended scene is planar, since the images remain in focus for object distances larger than 13 cm.

The goal of the next calibration step is to compensate the field dependent spectral transmission properties of each channel. Consequently, the spectral response of each pixel in each channel is calibrated in order to account for the shift to shorter wavelength with increasing field position as explained in Sec. 2.4.

The calibration is based on placing a diffuse transmission screen directly in front of the multispectral imaging system, which is illuminated by a monochromatic and spatially homogenized light source. The light source utilizes a tunable monochromator system covering the visible and near-infrared spectral range. The emitted wavelength can be tuned with a step size of 3 nm, which is smaller than the spectral resolution of the spectral sensor of approximately 10 nm. The output of the monochromator is homogenized by a fly’s eye condenser and projected onto a diffuse screen. The image of the illuminated screen taken by a luminance camera is shown in Fig. 9. The red rectangle indicates the part of the screen that is recorded by all 66 channels of the multispectral camera in close proximity to the screen. The image shows an intensity distribution with a maximum in the center and a decay to 93% toward the outer region of interest indicated by the red rectangle. The measured intensity distribution verifies the homogeneity of the illuminated screen, ensuring that each individual channel receives the same amount of light. An almost constant signal level is assumed for the following pixel-wise spectral calibration due to the marginal variation over the screen. The calibration process is conducted by tuning the monochromatic light source within a spectral range of 430–871 nm while capturing an image of the screen for each 3 nm step and keeping the integration time constant.

Fig. 9. Luminance camera image of the diffusing screen in front of the multi-aperture camera. The illuminated area of the screen has a diameter of approximately 60 mm. The red rectangle shows the size of the field that is recorded by all 66 channels of the multi-spectral camera.

Thus, the spectral response is measured as an intensity signal for each pixel dependent on the wavelength. As an example, the normalized intensity profiles for three pixels in the center of the channels (-5;0), (0;0) and (5;0) are shown in the upper part of Fig. 10(a)–(c). Afterwards, the measured data is fitted for each pixel by a Gaussian distribution, which defines a central wavelength and a bandwidth (FWHM: full width at half maximum) for the specific pixel. This procedure allows for the derivation of an extensive look-up table, which allocates a specific central wavelength to each pixel. The results of the spectral pixel calibration are shown in the lower part of Fig. 10(a)–(c) for the three channels (-5;0), (0;0) and (5;0). Each sub-figure displays the fitted central wavelength ${\lambda _f}$ for all pixel positions of each channel in a color-coded plot with 405 × 405 pixels. The maximum wavelengths in Fig. 10(a)–(c) are 834.4 nm, 665.4 nm and 474.2 nm, respectively. Each plot clearly shows the decay of the transmitted wavelength from the center to the edge of each sub-image, as expected from the oblique incidence angle onto the LVF for off-axis field positions. The blue dots mark the position of the maximum wavelength for each channel. The small shift compared to the geometrical center is in accordance with the LVF characteristic based on Eq. (4) discussed in Sec. 2.4. The rounded corners in each diagram correspond to the spatial boundaries of the baffle array. Additionally, pixels related to wavelengths smaller than 450 nm detect no intensity, so that they appear in dark blue in the diagram (left peripheral region of Fig. 10(c)). The cut-off wavelength is attributed to the low SNR due to the reduced light intensity of the monochromator as well as the reduced detector efficiency in combination with the low filter transmittance.
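A minimal sketch of the per-pixel Gaussian fit, assuming the measured response is sampled at the 3 nm monochromator steps; function and variable names are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(lam, amp, lam_c, sigma):
    return amp * np.exp(-0.5 * ((lam - lam_c) / sigma) ** 2)

def calibrate_pixel(wavelengths, intensities):
    """Return (central wavelength, FWHM) in nm for one pixel's response."""
    p0 = [intensities.max(), wavelengths[np.argmax(intensities)], 5.0]
    popt, _ = curve_fit(gaussian, wavelengths, intensities, p0=p0)
    fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[2])  # FWHM from sigma
    return popt[1], fwhm
```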

Fig. 10. The normalized intensity response w.r.t. the wavelength is shown for three pixels in the center of the channels (-5;0), (0;0) and (5;0), together with the fit of a Gaussian distribution to the measured data, in the upper part. Results of the spectral calibration for three different channels with 834.4 nm (a), 665.4 nm (b) and 474.2 nm (c) maximum wavelength, respectively. The blue dot marks the position of the maximum wavelength in each channel.

Figure 11 shows the calibrated wavelengths as cross-sections from Fig. 10, normalized to the corresponding maximum wavelength. In particular, Fig. 11(a) and (b) depict cross-sections of the dependency on the incidence angles in ${\theta _x}$- and ${\theta _y}$-direction, respectively. All curves pass through the center position of their specific channel. The graphs along the ${\theta _x}$-direction show a slightly non-symmetric behavior, which can be attributed to the linear dispersion of the LVF. In contrast, the graphs in ${\theta _y}$-direction are symmetric due to the negligible linear term in Eq. (4).

Fig. 11. Plot of fitted wavelength normalized to the maximum wavelength 834.4 nm, 665.4 nm and 474.2 nm over the field angle (scan through the center) in x- and y-direction shown in the left and right plot, respectively.

Finally, Fig. 12(a) shows the fitted central wavelength as a function of the channel number for two different pixels, representing the fields $[{{\theta_x},{\theta_y}} ]= [{0^\circ ,0^\circ } ]$ and $[{{\theta_x},{\theta_y}} ]= [{18^\circ ,18^\circ } ]$. Here, the channels are sorted according to their maximum transmission wavelength, where channel 1 is associated with the smallest wavelength and channel 66 with the largest wavelength. A close-to-linear dependency is observed for both field positions, which is in agreement with the optimized LVF slanting angle $\alpha $ selected in the design process. The solid lines depict the results of the corresponding linear fits, which exhibit coefficients of determination R² larger than 0.99. The linear sampling is approximately 6.0 nm per channel at [0°, 0°] and 5.83 nm per channel at [18°, 18°], respectively. The dependency on the AOI of the filter also reveals the slightly smaller dispersion for the outermost field positions.
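A brief sketch of this linearity check; the fit and the coefficient of determination can be computed as follows, with illustrative input arrays:

```python
import numpy as np

def linear_sampling_fit(channel_idx, center_wavelengths):
    """Linear fit of sorted per-channel center wavelengths; returns (slope, R^2)."""
    slope, intercept = np.polyfit(channel_idx, center_wavelengths, 1)
    predicted = slope * channel_idx + intercept
    ss_res = np.sum((center_wavelengths - predicted) ** 2)
    ss_tot = np.sum((center_wavelengths - center_wavelengths.mean()) ** 2)
    return slope, 1.0 - ss_res / ss_tot  # nm per channel, R^2
```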

Fig. 12. (a) Plot of the fitted wavelength over each channel at two different field positions $[{{\theta_x},{\theta_y}} ]= [{0^\circ ,0^\circ } ]$ and $[{{\theta_x},{\theta_y}} ]= [{18^\circ ,18^\circ } ]$ as well as the linear fit, respectively. (b) Plot of the resulting spectral resolution (FWHM) from the wavelength fit over each channel at two different field positions $[{{\theta_x},{\theta_y}} ]= [{0^\circ ,0^\circ } ]$ and $[{{\theta_x},{\theta_y}} ]= [{18^\circ ,18^\circ } ]$.

In addition, Fig. 12(b) shows the resulting spectral resolution, described by the full width at half maximum (FWHM), for each channel as in Fig. 12(a), respectively. The spectral resolution decreases (increasing FWHM) linearly with increasing wavelength. The absolute value of the spectral resolution remains slightly below approx. 2% of the fitted wavelength ${\lambda _f}$ over the whole spectral range. Furthermore, the spectral resolution is independent of the AOI on the filter, as indicated by the almost identical values for the two different field positions.

In comparison to the system of Renhorn et al. [19], we achieve a comparably high spectral resolution of less than 2% of the center wavelength. However, in our approach the spectral resolution is independent of the field point. The spectral resolution of Mu et al. [20] is approx. 4% of the center wavelength and, thus, larger than the values obtained here. The bandpass filters used by Meng et al. [21] provide a significantly coarser spectral resolution of approx. 20 nm compared to our multispectral imaging system.

4. Experimental results

Finally, the goal of this section is the extraction and analysis of the spectral information of an extended scene and the comparison to a commercial spectrometer measurement in order to demonstrate the system’s capabilities. Additionally, the various calibration steps are illustrated, which are required for the proper determination of the spectral information. Several green leaves are arranged as a test scene. In particular, the scene comprises leaves of a natural plant in different aging stages and, for comparison, an artificial plastic leaf. The colors of the leaves vary from light green to dark green in the visible spectral range. The scene is illuminated with halogen tungsten lamps with a color temperature of 2900 K. A raw image of 11 × 6 sub-images of the scene is captured with the multispectral camera at a distance of 50 cm (see Fig. 13(a)). For comparison, Fig. 13(b) shows the scene captured with a standard RGB camera as well as the marked positions on the leaves which will be analyzed. The pinnate leaf with the narrow leaflets located at the upper left is artificial, whereas the other leaves belong to a natural plant and are sorted according to their age: the leftmost leaf is the youngest and the rightmost leaf the oldest. Note that the leaves are placed on a round, black mat in order to minimize stray light effects and to segment the objects of interest from the background.

Fig. 13. (a) Raw image of the multispectral camera with an array of 11 × 6 sub-images. (b) RGB-image of the object scene using a conventional camera for comparison purposes. (c) White reference corrected image of 2 × 2 neighboring channels. The spatial calibration is used to find the corresponding center positions (marked with four crosses) of the sub-images in order to extract them. (d) Spatially corrected image (cropped to 220 × 220 pixels) of the channels (-2;0), (-1;0), (-2;1) and (-1;1). (f) Overlay of the four channels corresponding to the spectral calibration scheme for the multispectral data cube acquisition. (e) Extraction of the spectral information for different object points in the scene.

A diffuse reflectance target (Zenith Lite) with a calibrated Lambertian reflectance of 95% is used as a ‘white’ reference in order to compensate for the spectral distribution of the lamps in the measured reflectance spectrum. The corrected reflectance intensity ${I_{cor}}(x,y)$ is calculated according to [28]:

$${I_{cor}}(x,y) = \frac{{{I_s}(x,y) - {I_{s,dark}}(x,y)}}{{{I_w}(x,y) - {I_{w,dark}}(x,y)}}\cdot\frac{{{\tau _w}}}{{{\tau _s}}}, $$
where ${I_s}(x,y)$ and ${I_{s,dark}}(x,y)$ are the image intensities of the scene and the corresponding dark image, and ${I_w}(x,y)$ and ${I_{w,dark}}(x,y)$ are the images of the diffuse target and the corresponding dark image, respectively. The quantities ${\tau _s}$ and ${\tau _w}$ are the exposure times of the corresponding images, respectively. A region of interest of 2 × 2 channels is used to show the white reference corrected image in Fig. 13(c). Subsequently, the spatial calibration results from the previous section are used to determine the proper center positions of the channels, which are marked with a cross in each channel. Afterwards, each channel is cropped to an image of 220 × 220 pixels for better visualization, which is shown in Fig. 13(d). The reflected intensity is represented in the four different channels by a particular gray value. The spectral calibration results are then used for the determination of the multispectral data cube, as shown in Fig. 13(f). The field-dependent shift of the transmitted wavelength results in a curved spectral plane for each channel. The corrected images are superimposed with the spectral calibration points in the spectral cube. Finally, Fig. 13(e) depicts the spectral information of the four different plant leaves, which are extracted as pillars out of the data cube.
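A minimal sketch of the correction of Eq. (8), applied per channel image; all inputs are assumed to be 2D arrays of identical shape, and tau_s, tau_w are the exposure times.

```python
import numpy as np

def white_reference_correct(I_s, I_s_dark, I_w, I_w_dark, tau_s, tau_w):
    """Reflectance image corrected for illumination and dark signal, Eq. (8)."""
    num = I_s.astype(float) - I_s_dark
    den = I_w.astype(float) - I_w_dark
    # Guard against division by zero in dead or unilluminated pixels.
    ratio = np.divide(num, den, out=np.zeros_like(num), where=den != 0)
    return ratio * (tau_w / tau_s)
```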

The reflectance spectra differ clearly between the natural and the artificial leaves. The well-known red-edge effect [29] at around 700 nm is typical for natural leaves due to their chlorophyll content. However, determining spectral differences between the natural leaves at various aging stages is more challenging, as seen in Fig. 13(e). The intensity of the reflectance spectrum between 500 nm and 700 nm decreases with increasing chlorophyll content from light green to dark green leaves [30]. A commercial spectrometer from Avantes (AvaSpec-ULS2048LTEC) with a spectral sampling of 1 nm is used as a reference for our multispectral imaging system. The leaves in aging states one and three are analyzed with the reference spectrometer. The spectra agree well with those measured by our imaging system. In order to emphasize the high spectral sampling of our imaging system, the red-edge inflection point (REIP) is determined by fitting the steep red edge for all three natural leaves. The resulting REIP wavelength is 704 nm for state one, 710 nm for state two and 719 nm for state three, compared to 704 nm for reference state one and 723 nm for reference state three. The REIP wavelength is slightly shifted towards longer wavelengths with increasing chlorophyll content of the leaves, which is in line with the findings presented in Ref. [30]. The capability to detect such small spectral changes in the order of a few nanometers shows the enormous potential of our imaging system and the significant advantage compared to conventional multispectral snapshot cameras in the area of remote sensing that only offer a few spectral channels [31].
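As the exact red-edge fitting procedure is not specified here, the sketch below uses one common REIP estimate, the maximum of the first derivative of the reflectance spectrum within the red-edge region; the 680–750 nm window is an assumption.

```python
import numpy as np

def red_edge_inflection(wavelengths, reflectance):
    """REIP estimate: wavelength of the steepest rise in the red-edge region."""
    mask = (wavelengths >= 680) & (wavelengths <= 750)  # assumed red-edge window
    lam, refl = wavelengths[mask], reflectance[mask]
    slope = np.gradient(refl, lam)  # first derivative of the spectrum
    return lam[np.argmax(slope)]
```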

5. Conclusion

We have presented a multi-aperture system approach for a compact snapshot multispectral imaging system, which combines state-of-the-art micro-optical manufacturing methods, a multi-aperture imaging configuration and a slanted linear variable filter. The developed demonstration system is capable of capturing spectrally resolved, extended object fields in a single shot with high resolution. Furthermore, the proposed system design approach offers a high flexibility with respect to spatial and spectral resolution by tailoring the number of spectral channels and customizing the micro-optical design. The individual spectral channels can be optimized for their specific wavelength range in order to compensate for the chromatic focal shift. In addition, our approach is independent of a specific image sensor type and is thus transferable to different image sensor architectures and technologies such as InGaAs-based detectors. Furthermore, the demonstrator is particularly applicable to remote sensing applications with drones, e.g. for agriculture and environmental monitoring, due to the small system size and the low weight. The realized demonstration system enables the snapshot acquisition of 66 spectral channels with a linear spectral sampling of about 6 nm covering an extended wavelength range between 450 nm and 850 nm. The camera demonstrator provides a large field of view of approximately 68° and a spatial sampling of approximately 400 × 400 pixels per channel. A spectral and spatial calibration procedure, required for an accurate multispectral data fusion, was additionally discussed.

As a next step, the linear variable filter may be replaced by a tiled filter array in order to address smaller image sensor architectures. Moreover, the discrete filter wavelengths can be customized to address a specific application area. Furthermore, we want to expand our approach to other wavelength ranges such as the short wave infrared spectral range.

Funding

Micro- and nano structured optics for infrared (MIRO); Kooperationsprogramm Fachhochschulen; Fraunhofer-Gesellschaft (066/601039).

Acknowledgments

The authors would like to thank Bernd Höfer for the mechanical system layout and integration, and Simone Thau, who participated in the fabrication of the micro-optics module.

Disclosures

The authors declare no conflicts of interest.

References

1. T. Adão, J. Hruška, L. Pádua, J. Bessa, E. Peres, R. Morais, and J. J. Sousa, “Hyperspectral Imaging: A Review on UAV-Based Sensors, Data Processing and Applications for Agriculture and Forestry,” Remote Sens. 9(11), 1110 (2017). [CrossRef]  

2. D. Caballero, R. Calvini, and J. M. Amigo, “Chapter 3.3 - Hyperspectral imaging in crop fields: precision agriculture,” in Hyperspectral Imaging, vol. 32 of Data Handling in Science and Technology, J. M. Amigo, ed. (Elsevier, 2020), pp. 453–473.

3. P. Moghadam, D. Ward, E. Goan, S. Jayawardena, P. Sikka, and E. Hernandez, “Plant disease detection using hyperspectral imaging,” in 2017 International Conference on Digital Image Computing: Techniques and Applications (DICTA), (IEEE, 2017), pp. 1–8.

4. J. Deal, A. Britain, T. Rich, and S. Leavesley, “Excitation-scanning hyperspectral imaging microscopy to efficiently discriminate fluorescence signals,” J. Visualized Exp. 150, e59448 (2019). [CrossRef]  

5. Y. Wang, W. Fu, Y. Shen, A. R. Badireddy, W. Zhang, and H. Huang, “Hyperspectral Imaging Microscopy of Acetaminophen Adsorbed on Multiwalled Carbon Nanotubes,” Langmuir 34(44), 13210–13218 (2018). [CrossRef]  

6. A. M. Melesse, Q. Weng, P. S. Thenkabail, and G. B. Senay, “Remote Sensing Sensors and Applications in Environmental Resources Mapping and Modelling,” Sensors 7(12), 3209–3241 (2007). [CrossRef]

7. M. Moroni, E. Lupo, E. Marra, and A. Cenedese, “Hyperspectral Image Analysis in Environmental Monitoring: Setup of a New Tunable Filter Platform,” Procedia Environ. Sci. 19, 885–894 (2013). [CrossRef]  

8. X. Briottet, Y. Boucher, A. Dimmeler, A. Malaplate, A. Cini, M. Diani, H. Bekman, P. Schwering, T. Skauli, I. Kasen, I. Renhorn, L. Klasén, M. Gilmore, and D. Oxford, “Military applications of hyperspectral imagery,” in Targets and Backgrounds XII: Characterization and Representation, vol. 6239, W. R. Watkins and D. Clement, eds. (SPIE, 2006), pp. 82–89.

9. M. Rosenberger and R. Celestre, “Smart multispectral imager for industrial applications,” in 2016 IEEE International Conference on Imaging Systems and Techniques (IST), (IEEE, 2016), pp. 7–12.

10. Y. Xu, A. Giljum, and K. F. Kelly, “A hyperspectral projector for simultaneous 3d spatial and hyperspectral imaging via structured illumination,” Opt. Express 28(20), 29740–29755 (2020). [CrossRef]  

11. M. E. Pawlowski, J. G. Dwight, T.-U. Nguyen, and T. S. Tkaczyk, “High performance image mapping spectrometer (IMS) for snapshot hyperspectral imaging applications,” Opt. Express 27(2), 1597–1612 (2019). [CrossRef]  

12. J. Allington-Smith, “Basic principles of integral field spectroscopy,” New Astron. Rev. 50(4-5), 244–251 (2006). [CrossRef]  

13. A. Gorman, D. W. Fletcher-Holmes, and A. R. Harvey, “Generalization of the Lyot filter and its application to snapshot spectral imaging,” Opt. Express 18(6), 5602–5608 (2010). [CrossRef]  

14. G. Wong, R. Pilkington, and A. R. Harvey, “Achromatization of Wollaston polarizing beam splitters,” Opt. Lett. 36(8), 1332–1334 (2011). [CrossRef]  

15. N. Hagen and E. L. Dereniak, “Analysis of computed tomographic imaging spectrometers. I. Spatial and spectral resolution,” Appl. Opt. 47(28), F85–F95 (2008). [CrossRef]  

16. N. Hagen and M. Kudenov, “Review of snapshot spectral imaging technologies,” Opt. Eng. 52(9), 090901 (2013). [CrossRef]  

17. M. Hubold, R. Berlich, C. Gassner, R. Brüning, and R. Brunner, “Ultra-compact micro-optical system for multispectral imaging,” in MOEMS and Miniaturized Systems XVII, vol. 10545, W. Piyawattanametha, Y.-H. Park, and H. Zappe, eds. (SPIE, 2018), pp. 206–213.

18. “Data Sheet Bifrost – Continuously Variable Bandpass Filter for Hyperspectral Imaging (LF103245),” https://www.deltaopticalthinfilm.com/wp-content/uploads/data-sheets/linear-variable-filters/LF103245.pdf. Accessed: 26 Oct. 2020.

19. I. G. E. Renhorn, D. Bergström, J. Hedborg, D. Letalick, and S. Möller, “High spatial resolution hyperspectral camera based on a linear variable filter,” Opt. Eng. 55(11), 114105 (2016). [CrossRef]  

20. T. Mu, F. Han, D. Bao, C. Zhang, and R. Liang, “Compact snapshot optically replicating and remapping imaging spectrometer (ORRIS) using a focal plane continuous variable filter,” Opt. Lett. 44(5), 1281–1284 (2019). [CrossRef]  

21. L. Meng, T. Sun, R. Kosoglow, and K. Berkner, “Evaluation of multispectral plenoptic camera,” in Digital Photography IX, vol. 8660, N. Sampat and S. Battiato, eds. (SPIE, 2013), pp. 96–102.

22. R. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed. (Prentice Hall, Upper Saddle River, N.J., 2002), chap. 4.

23. T. Goossens, B. Geelen, J. Pichette, A. Lambrechts, and C. V. Hoof, “Finite aperture correction for spectral cameras with integrated thin-film Fabry-Perot filters,” Appl. Opt. 57(26), 7539–7549 (2018). [CrossRef]  

24. D. Daly, R. F. Stevens, M. C. Hutley, and N. Davies, “The manufacture of microlenses by melting photoresist,” Meas. Sci. Technol. 1(8), 759–766 (1990). [CrossRef]  

25. P. Dannberg, L. Erdmann, R. Bierbaum, A. Krehl, A. Bräuer, and E. B. Kley, “Micro-optical elements and their integration to glass and optoelectronic wafers,” Microsyst. Technol. 6(2), 41–47 (1999). [CrossRef]  

26. P. D. Burns, “Slanted-Edge MTF for Digital Camera and Scanner Analysis,” in PICS, (IS&T - The Society for Imaging Science and Technology, 2000), pp. 135–138.

27. International Organization for Standardization, ISO 12233:2017(en) Photography - Electronic still picture imaging - Resolution and spatial frequency responses, ISO 12233:2017-ed.3.0 (2017).

28. J. Burger and P. Geladi, “Hyperspectral NIR Image Regression Part I: Calibration and Correction,” J. Chemom. 19(5-7), 355–363 (2005). [CrossRef]  

29. I. Filella and J. Penuelas, “The red edge position and shape as indicators of plant chlorophyll content, biomass and hydric status,” Int. J. Remote Sens. 15(7), 1459–1470 (1994). [CrossRef]  

30. C. Buschmann and H. K. Lichtenthaler, “Contribution of Chlorophyll Fluorescence to the Reflectance of Leaves in Stressed Plants as Determined with the VIRAF-Spectrometer,” Zeitschrift für Naturforschung C 54(9-10), 849–857 (1999). [CrossRef]  

31. M. Franzini, G. Ronchetti, G. Sona, and V. Casella, “Geometric and radiometric consistency of parrot sequoia multispectral imagery for precision agriculture applications,” Appl. Sci. 9(24), 5314 (2019). [CrossRef]  
