Accurate ray tracing model of an imaging system based on image mapper

Open Access

Abstract

The image mapper plays a key role in the imaging process of the image mapping spectrometer (IMS), a snapshot imaging spectrometer with advantages in light throughput, temporal resolution, and compactness. In this paper, an accurate ray tracing model of the imaging units of the IMS, especially the image mapper, is presented in the form of vector operations. Based on the proposed model, the behavior of light reflection on the image mapper is analyzed thoroughly, including the precise position of the reflection point and the interaction between adjacent facets. A rigorous spatial correspondence between object points and pixels on the detector is determined by tracing the chief ray of an arbitrary point in the field. The shadowing effect, which comprises shadowing between adjacent facets and blocking by the facets’ side walls, is analyzed with the proposed model. The experimental results verify the fidelity of the model and the existence of the shadowing effect. This research aids comprehension of the imaging mechanism of the IMS and will facilitate future design and analysis.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

The image mapping spectrometer (IMS) is a snapshot imaging spectrometer that can capture two-dimensional spatial information and one-dimensional spectral information of an object in a single exposure. Unlike traditional imaging spectrometers based on scanning in the spatial or spectral domain [1,2], the IMS is free of motion artifacts and suitable for real-time applications. Over the years, it has been successfully applied to biomedical imaging from the cellular [3,4] to the tissue level [5–9] and to remote sensing tasks such as gas detection [10] and environmental monitoring [11]. Briefly, the IMS operates by slicing the image formed by the fore optics and mapping the image slices into different directions in two dimensions using a miniaturized component termed the image mapper, which is composed of mirror facets. After being collimated and dispersed, the object’s data cube is reimaged on the detector. The principle is detailed in [3,12].

The prototype of the IMS was first introduced by Gao et al. in 2009 [12]. Since then, researchers have focused on improving the performance of the IMS with respect to both hardware and theory. The spatial and spectral resolution of the IMS is limited by the design of the image mapper. Kester et al. presented the design and fabrication method of the image mapper based on raster fly cutting [13]. Nguyen et al. reported a ruling technique that surpasses raster fly cutting in fabrication speed and consistency of facet widths [14]. Yuan et al. put forward a novel design of the image mapper that reduces cross talk between adjacent sub-pupils [15]. Other optical design improvements, such as removal of the beam expander, integration with an Amici prism array [6], incorporation of a mirror relay [11], and use of an sCMOS detector [16], improved the system's compactness, light throughput, frame rate, and SNR. In regard to theory, development of the imaging model [13,17–19] and calibration procedure [3,20] facilitated the design process and improved reconstruction results. It is necessary to build an accurate imaging model to predict the performance of the IMS. The model based on diffraction theory is suitable for cross talk analysis [13,18]. The geometric optical model builds the one-to-one mapping relationship between voxels in the data cube and pixels on the detector. Gao proposed a 3D reflection model of the image mapper [17], and Ding further presented a geometric optical model of the whole system [19].

However, several problems remain to be addressed. Conceptually, the object is divided into multiple vertical image lines on the detector, with void regions left between them for dispersion. In actuality, the image lines are tilted and incomplete to different extents under different sub-pupils. The tilt comes from the spatial distribution of reflection points on the image mapper and is not a serious problem after system calibration. The incompleteness of the image lines is an image artifact attributed to the shadowing effect. The shadowing effect was first mentioned in [12] and interpreted as the “neighboring slice’s side walls shadowing other slices”. It could be ignored in the prototype of the IMS because the tilt angles of the facets were small [12], but a beam expander was then necessary to magnify the pupil array to match the dimensions of the reimaging lens array. To remove the beam expander and thereby improve light throughput, larger tilt angles were adopted to provide wider spacing between the sub-pupils [6]. Currently, the maximum tilt angle is about 10 times that of the prototype. The shadowing effect occurs when light aimed at one facet is reflected by an adjacent facet, causing shadowing between them, or when light is blocked by the side walls of the facets. This effect leads to incompleteness of the image lines and loss of spatial information on the image plane. Hence, the shadowing effect should be considered.

Since the image mapper is the key component of the IMS, the behavior of light reflection on it is of great importance in the ray tracing model. Gao’s reflection model [17] utilized the law of reflection to derive the trigonometric relation between a facet’s tilt angle and the reflected ray’s steering angle. Besides the orientation of the reflected ray, the exact reflection point is also needed for tracing the image point, which was not considered in Gao’s model. Ding made the approximation that the reflection point is the intersection point of the incident ray and the image mapper substrate plane [19]. This is based on the hypothesis that all reflection points lie on the same plane, which cannot account for the tilt of the image lines. Furthermore, in the models mentioned above, the densely arranged mirror facets were modeled independently, without considering their influence on each other, which is responsible for the shadowing effect. Consequently, a more accurate ray tracing model is needed to interpret the tilt and incompleteness of the image lines.

The prism is also an important component in the IMS, providing dispersion. However, it causes the “smile” effect [21], a geometric distortion that curves the image lines. The superposition of tilt and curvature of the image lines makes it difficult to analyze the effects induced by the image mapper independently. Therefore, the prism is excluded from the ray tracing model to facilitate the analysis of the imaging characteristics of the image mapper. The system without the prism is denoted as the image slicing and mapping system (ISMS).

In this research, we present an accurate ray tracing model of the ISMS based on vector operations, a method of ray tracing that has been commonly used in computer vision [22], prism-based beam steering systems [23], etc. Ray tracing is implemented from surface to surface and requires little computation, which offers a good trade-off between accuracy and computational cost. Furthermore, the shadowing effect is analyzed using the presented model. The paper is organized as follows. The operating principle of the ISMS is reviewed and the ray tracing model is presented in Section 2. Simulation results, including analysis of the shadowing effect, are presented in Section 3. The experimental results in Section 4 verify the existence of the shadowing effect and the fidelity of the presented model. Conclusions are drawn and future work is discussed in Section 5.

2. Imaging methodology of the ISMS

2.1 General principle

The schematic layout of the ISMS is presented in Fig. 1. The object is imaged on the custom-fabricated image mapper by the fore optics, which is telecentric in its image space. The image mapper is composed of multiple mirror facets, which have different two-dimensional tilt angles with respect to the substrate plane. The facets are grouped into periodic blocks along the length of the image mapper. For simplicity, the figure shows only two blocks with four facets each. The image mapper divides the intermediate image into image slices and maps the sliced images into different directions determined by the facets’ tilt angles. The reflected light is collimated by the collimating lens and subsequently converged on the detector by the reimaging lens array. The image lines are encoded with the spatial information of the object and can be easily reconstructed using a remapping algorithm.

Fig. 1. Schematic layout of the ISMS.

2.2 Ray tracing model

The coordinate system of the ISMS is shown in Fig. 2. All the lenses in the ISMS are modeled as ideal thin lenses to simplify the ray tracing calculation, where the fore optics is denoted as L1, the collimating lens as L2, and the reimaging lens as L3. Here, $z_1$ is the distance between the object and the aperture stop, $z_2$ is the image distance of the fore optics, and $f_1$, $f_2$, and $f_3$ are the focal lengths of L1, L2, and L3, respectively. The local coordinate systems are established for all elements, such as ($x_{\text o},y_{\text o}$) for the object, ($\xi ,\eta$) for the aperture stop, and ($x_{\text L1},y_{\text L1}$) for the principal plane of L1.

Fig. 2. Coordinate system of the ISMS.

2.2.1 Fore optics model

It is common practice to trace the chief ray to locate the image point corresponding to a given object point. The imaging model of the fore optics is depicted in Fig. 3. Because the fore optics is telecentric in the image space, the chief ray is parallel to the optical axis z. According to the pinhole model [22], a point $\textbf {P}_{\textrm {o}}=(x_{\textrm {o}},y_{\textrm {o}},z_{\textrm {o}})^{\mathsf T}$ on the object plane is mapped to the point $\textbf {P}_{\textrm {p}}=(x_{\textrm {p}},y_{\textrm {p}},z_{\textrm {p}})^{\mathsf T}$ on the primary image plane. Ignoring the z component of the coordinates, the mapping relationship is

$$\begin{bmatrix} x_{\text p}\\ y_{\text p} \end{bmatrix}=\begin{bmatrix} -\dfrac{f_1}{z_1} & 0\\ 0 & -\dfrac{f_1}{z_1} \end{bmatrix}\begin{bmatrix} x_{\text o}\\ y_{\text o} \end{bmatrix}$$

Fig. 3. Layout diagram of the fore optics.

To separate the incident and reflected rays, the image mapper is rotated by an angle $\theta$ relative to the primary image plane ${x_{\text p}}{o_{\text p}}{y_{\text p}}$. The substrate plane of the image mapper, denoted as ${x_{\text s}}{o_{\text s}}{y_{\text s}}$, is the reference plane for cutting each facet. The ray intersects the substrate plane at the point $\textbf {P}_{\textrm {s}}=(x_{\textrm {s}},y_{\textrm {s}},z_{\textrm {s}})^{\mathsf T}$, which is expressed as

$$\begin{bmatrix} x_{\mathrm s}\\ y_{\mathrm s} \end{bmatrix}=\begin{bmatrix} \dfrac{1}{\cos \theta} & 0\\ 0 & 1 \end{bmatrix}\begin{bmatrix} x_{\mathrm p}\\ y_{\mathrm p} \end{bmatrix}$$
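
For illustration, Eqs. (1) and (2) can be written compactly in code. The following Python/NumPy sketch is ours (the simulations in Section 3 were implemented in MATLAB); the function name and argument list are illustrative, not taken from the paper.

```python
import numpy as np

def object_to_substrate(x_o, y_o, f1, z1, theta):
    """Map an object point (x_o, y_o) to the point (x_s, y_s) where its
    telecentric chief ray meets the substrate plane of the image mapper.

    f1    : focal length of the fore optics L1
    z1    : distance between the object and the aperture stop
    theta : overall tilt angle of the image mapper (rad)
    """
    # Eq. (1): ideal thin-lens imaging of the telecentric fore optics
    x_p = -(f1 / z1) * x_o
    y_p = -(f1 / z1) * y_o
    # Eq. (2): the substrate plane is tilted by theta, so x is stretched by 1/cos(theta)
    return x_p / np.cos(theta), y_p
```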

2.2.2 Reflection model of image mapper

In Ding’s model [19], an approximation is made that the point $\mathbf {P}_{\text s}$ is the reflection point. Actually, owing to the tilt angle of each facet plane with respect to the substrate plane, the exact reflection point $\textbf {P}_{\textrm {r}}=(x_{\textrm {r}},y_{\textrm {r}},z_{\textrm {r}})^{\mathsf T}$ does not coincide with $\mathbf {P}_{\text s}$.

The reflection model of the image mapper is illustrated in Fig. 4; only five facet planes are shown for clarity. The incident chief ray is parallel to the optical axis z; the normalized direction vector under the ${x_{\text s}}{y_{\text s}}{z_{\text s}}$ coordinate system can be expressed as $\boldsymbol {\hat {\textbf {I}}}=(-\sin \theta ,0,\cos \theta )^{\mathsf T}$. The equation of the incident ray that passes through point $\mathbf {P}_{\mathrm s}$ in the direction parallel to $\boldsymbol {\hat {\textbf {I}}}$ can be described as follows, where t is any real number:

$$\left\{ \begin{aligned} {\text X} & =x_{\text s}-t\sin\theta\\ {\text Y} & =y_{\text s}\\ {\text Z} & =t\cos\theta \end{aligned} \right.$$

Fig. 4. Reflection model of the image mapper.

The initial normal vector of each facet with respect to the substrate plane is $\boldsymbol {\hat {\textbf {N}}}_0=(0,0,-1)^{\mathsf T}$. The facet is then rotated by $\alpha _{m,n}$ around the $x_{\mathrm s}$ axis and subsequently by $\beta _{m,n}$ around the $y_{\mathrm s}$ axis, where $m=1,2,\ldots ,M$, $n=1,2,\ldots ,N$, M is the number of tilt angles, and N is the number of blocks. The new normal vector of each facet is given by

$$\boldsymbol{\hat{\textbf{N}}}_{m,n}=\begin{bmatrix} \cos\beta_{m,n} & 0 & \sin\beta_{m,n}\\ 0 & 1 & 0\\ -\sin\beta_{m,n} & 0 & \cos\beta_{m,n} \end{bmatrix}\begin{bmatrix} 1 & 0 & 0\\ 0 & \cos\alpha_{m,n} & -\sin\alpha_{m,n}\\ 0 & \sin\alpha_{m,n} & \cos\alpha_{m,n} \end{bmatrix} \boldsymbol{\hat{\textbf{N}}}_0$$
The facet plane with normal vector $\boldsymbol {\hat {\textbf {N}}}_{m,n}$ is moved from the origin to point $(kb,0,0)^{\mathsf {T}}$, where b is the width of the facet, and $k=(MN+1)/2-[m+(n-1)M]$. Hence, the equation of the facet plane is given by
$$-\cos\alpha_{m,n}\sin\beta_{m,n}(\text X-kb)+\sin\alpha_{m,n}\text Y-\cos\alpha_{m,n}\cos\beta_{m,n}\text Z=0$$
The coordinates of the reflection point $\mathbf {P}_{\mathrm r}$ on the corresponding facet can be determined by solving Eqs. (3) and (5) simultaneously. According to the law of reflection [24], the reflected ray vector is determined once the incident ray and facet normal vectors are known. The normalized direction vector of the reflected ray $\boldsymbol {\hat {\textbf {R}}}_{1m,n}$ can be calculated by
$$\boldsymbol{\hat{\textbf{R}}}_{1m,n}=-2(\boldsymbol{\hat{\textbf{I}}}\cdot\boldsymbol{\hat{\textbf{N}}}_{m,n})\boldsymbol{\hat{\textbf{N}}}_{m,n}+\boldsymbol{\hat{\textbf{I}}}$$
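
The reflection model of Eqs. (3)–(6) reduces to a plane–line intersection followed by a vector reflection. The sketch below is an illustrative Python/NumPy implementation under the conventions of Fig. 4; the function names are ours.

```python
import numpy as np

def facet_normal(alpha, beta):
    """Unit normal of a facet rotated by alpha about x_s and then beta about y_s (Eq. 4)."""
    return np.array([-np.cos(alpha) * np.sin(beta),
                      np.sin(alpha),
                     -np.cos(alpha) * np.cos(beta)])

def reflect_on_facet(x_s, y_s, theta, alpha, beta, k, b):
    """Exact reflection point P_r (Eqs. 3 and 5) and reflected direction R1 (Eq. 6)."""
    P_s = np.array([x_s, y_s, 0.0])                         # intersection with substrate plane
    I_hat = np.array([-np.sin(theta), 0.0, np.cos(theta)])  # incident chief ray direction
    N_hat = facet_normal(alpha, beta)
    P_0 = np.array([k * b, 0.0, 0.0])                       # point on the facet plane (Eq. 5)
    # intersection of the ray P_s + t*I_hat with the plane N_hat . (P - P_0) = 0
    t = np.dot(N_hat, P_0 - P_s) / np.dot(N_hat, I_hat)
    P_r = P_s + t * I_hat
    # vector form of the law of reflection (Eq. 6)
    R1 = I_hat - 2.0 * np.dot(I_hat, N_hat) * N_hat
    return P_r, R1
```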

2.2.3 Collimating and reimaging model

As shown in Fig. 5, after reflection the optical axis is redirected along the $-z'$ axis, which makes an angle of 2$\theta$ with the z axis in Fig. 3. The substrate plane is rotated counterclockwise by an angle $\theta$ around the $y_{\text s}$ axis to obtain a virtual plane that is perpendicular to the new optical axis. The coordinates of points and vectors need to be transformed from the $x_{\text s}y_{\text s}z_{\text s}$ system to the $x_{\text v}y_{\text v}z_{\text v}$ system. The reflection point $\mathbf {P}_{\text r}$ becomes $\mathbf {P}_{\text v}=\text T\mathbf {P}_{\text r}$ and the reflected ray vector $\boldsymbol {\hat {\textbf {R}}}_{1m,n}$ becomes $\boldsymbol {\hat {\textbf {R}}}_{2m,n}=\text T\boldsymbol {\hat {\textbf {R}}}_{1m,n}$. The transformation matrix T is given by

$$\mathrm{T}=\begin{bmatrix} \cos\theta & 0 & -\sin\theta\\ 0 & 1 & 0\\ \sin\theta & 0 & \cos\theta \end{bmatrix}$$
By similar triangles, the coordinates of point $\mathbf {P}_2$ on the plane of the collimating lens are given by
$$\mathbf{P}_2=-\frac{z_{\text v}+f_2}{z_{2m,n}}\boldsymbol{\hat{\textbf{R}}}_{2m,n}+\mathbf{P}_{\text v}$$
where $z_{\text v}$ and $z_{2m,n}$ are the z components of $\mathbf {P}_{\text v}$ and $\boldsymbol {\hat {\textbf {R}}}_{2m,n}$, respectively.

Fig. 5. Schematic diagram of reimaging process.

Parallel rays in the object space of a lens converge to the same point on its focal plane [25]. Hence, point $\mathbf {P}_3$, which is also the center of the sub-pupil, is given by

$$\mathbf{P}_3=-\frac{f_2}{z_{2m,n}}\boldsymbol{\hat{\textbf{R}}}_{2m,n}+(0,0,-f_2)^{\mathsf T}$$
Hence, the incident ray vector of the reimaging lens is calculated by
$$\boldsymbol{\hat{\textbf{R}}}_{3m,n}=\frac{\mathbf{P}_3-\mathbf{P}_2}{|\mathbf{P}_3-\mathbf{P}_2|}$$
The chief ray emerges from L3 along its original direction and the final image point is derived as follows:
$$\mathbf{P}_{\text i}=-\frac{f_3}{z_{3m,n}}\boldsymbol{\hat{\textbf{R}}}_{3m,n}+\mathbf{P}_3$$
Considering the discrete sampling of the detector and the origin at the top-left of the image, the index of pixel $(i_{\mathrm d},j_{\mathrm d})$ can be derived by
$$\left\{ \begin{aligned} i_\text d & =\biggl\lceil\frac{-y_\text i}{d_{\textrm{pixel}}}\biggr\rceil+\frac{M_\text d}{2}\\ j_\text d & =\biggl\lceil\frac{x_\text i}{d_{\textrm{pixel}}}\biggr\rceil+\frac{N_\text d}{2} \end{aligned} \right.$$
where $1 \leqslant i_{\text d} \leqslant M_{\text d}$, $1 \leqslant j_{\text d} \leqslant N_{\text d}$. $M_{\text d}$ and $N_{\text d}$ are the numbers of pixels in the vertical and horizontal directions, respectively, and $d_{\textrm {pixel}}$ is the width of the pixel.
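
Equations (7)–(12) can likewise be chained into a single routine that carries the reflection point and reflected direction through the collimating lens, the sub-pupil, and the reimaging lens to a pixel index. The following Python sketch is ours; it assumes the geometry described above (L2 one focal length from the virtual plane, L3 at the sub-pupil plane), and the pixel-index convention follows Eq. (12).

```python
import numpy as np

def trace_to_pixel(P_r, R1, theta, f2, f3, d_pixel, M_d, N_d):
    """Propagate the reflected chief ray to the detector and return the pixel index."""
    # Eq. (7): rotate from the substrate frame to the virtual-plane frame
    T = np.array([[np.cos(theta), 0.0, -np.sin(theta)],
                  [0.0,           1.0,  0.0],
                  [np.sin(theta), 0.0,  np.cos(theta)]])
    P_v, R2 = T @ P_r, T @ R1
    # Eq. (8): intersection with the plane z = -f2 (taken here as the plane of L2)
    P2 = -(P_v[2] + f2) / R2[2] * R2 + P_v
    # Eq. (9): sub-pupil centre on the back focal plane of L2 (z = -2*f2)
    P3 = -(f2 / R2[2]) * R2 + np.array([0.0, 0.0, -f2])
    # Eq. (10): chief-ray direction entering the reimaging lens L3
    R3 = (P3 - P2) / np.linalg.norm(P3 - P2)
    # Eq. (11): image point one focal length f3 beyond the sub-pupil plane
    P_i = -(f3 / R3[2]) * R3 + P3
    # Eq. (12): discrete pixel index, origin at the top-left of the detector
    i_d = int(np.ceil(-P_i[1] / d_pixel) + M_d / 2)
    j_d = int(np.ceil( P_i[0] / d_pixel) + N_d / 2)
    return (i_d, j_d), P_i
```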

3. Simulation results

3.1 Imaging results

The imaging process of the ISMS is simulated using MATLAB software. An extended source with uniform intensity is chosen as the object to highlight the profile of the image lines. Two models are simulated for comparison. One is Ding’s model (model 1), which assumes that the intersection point of the incident ray and the substrate plane is the reflection point. The other is the model derived in Section 2.2 (model 2), which calculates the exact reflection point on the facet. The simulation parameters are shown in Table 1. The simulation results are presented in Fig. 6, along with partial enlargements of two sub-pupils.
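
As a usage example, the three routines sketched above can be chained to trace a single object point through the ISMS. The parameter values below are placeholders chosen for illustration only; they are not the values listed in Table 1.

```python
import numpy as np

# placeholder parameters (NOT the values of Table 1); lengths in mm, angles in rad
theta = np.deg2rad(22.5)                 # overall tilt of the image mapper
f1, z1, f2, f3 = 60.0, 600.0, 50.0, 12.0
alpha, beta, k, b = np.deg2rad(1.0), np.deg2rad(2.0), 3, 0.07
d_pixel, M_d, N_d = 0.0055, 4000, 6000   # hypothetical detector format

x_s, y_s = object_to_substrate(5.0, 2.0, f1, z1, theta)
P_r, R1 = reflect_on_facet(x_s, y_s, theta, alpha, beta, k, b)
pixel, P_i = trace_to_pixel(P_r, R1, theta, f2, f3, d_pixel, M_d, N_d)
print(pixel)
```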

Fig. 6. Imaging simulation results of the ISMS. (a) Model 1. (b) Model 2. Enlargements of sub-pupil #1 and #23 are shown.

Table 1. Simulation parameters of the ISMS.

As shown in Fig. 6(a), with the same parameters, the results of model 1 do not exhibit tilted image lines because of the approximation of the reflection points. In the results of model 2, however, the tilt of the image lines is evident, and the slopes vary among sub-pupils depending on the tilt angles of the facets. Moreover, the image lines show varying degrees of incompleteness under different sub-pupils, the reason for which is discussed in Section 3.2.

3.2 Analysis of shadowing effect

The mechanism of the shadowing effect is illustrated in Fig. 7. Because of the height differences between facets and the overall tilt of the image mapper, two phenomena contribute to the shadowing effect. The first is that an incident ray aimed at the $k^{\textrm {th}}$ facet might be reflected by the $(k-1)^{\textrm {th}}$ facet, which causes incomplete image lines under the corresponding sub-pupil, such as sub-pupil #1 shown in Fig. 6(b). The second is that a ray from object space is not reflected by any facet but is blocked by a facet’s side wall; in other words, some spatial information of the object is lost. Both shadowing and blocking eventually lead to loss of information on the image plane, which is unfavorable for data reconstruction.
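
A rough way to reproduce this classification numerically is to intersect each incident chief ray with every facet plane (reusing the reflect_on_facet sketch above), keep only the hits that fall within a facet's width, and take the first hit along the ray. This is our simplification, assuming each facet occupies the interval [kb − b/2, kb + b/2] along the $x_{\text s}$ axis; it is not necessarily the authors' exact procedure.

```python
def classify_ray(x_s, y_s, theta, facets, b):
    """Classify one chief ray as 'reflected', 'shadowed', or 'blocked'.

    facets : list of (alpha, beta, k) describing every facet of the image mapper
    b      : facet width
    """
    # facet the ray is nominally aimed at: nearest facet centre along x_s
    k_target = min(facets, key=lambda f: abs(x_s - f[2] * b))[2]
    hits = []
    for alpha, beta, k in facets:
        P_r, _ = reflect_on_facet(x_s, y_s, theta, alpha, beta, k, b)
        if abs(P_r[0] - k * b) <= b / 2:       # hit falls within this facet's width
            hits.append((P_r[2], k))
    if not hits:
        return "blocked"                        # lost at a side wall
    k_first = min(hits, key=lambda h: h[0])[1]  # facet met first along the ray (+z)
    return "reflected" if k_first == k_target else "shadowed"
```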

Fig. 7. Illustration of shadowing effect.

Each facet of the image mapper divides and redirects an image slice. Therefore, the corresponding sub field of view (FOV) of each facet in the object space can be calculated through backward mapping based on our model. The sub FOVs of the facets are displayed in Fig. 8.
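
One brute-force alternative to the backward mapping, given here purely for illustration, is to sample the object plane on a grid, trace each chief ray forward with the routines sketched above, and record which facet (if any) actually performs the reflection; shadowed and blocked points then appear as reassigned or unassigned grid cells.

```python
import numpy as np

def sub_fov_map(xo_vals, yo_vals, f1, z1, theta, facets, b):
    """Index (into `facets`) of the facet that actually reflects each chief ray,
    sampled on an object-plane grid; -1 marks rays blocked by side walls."""
    fov = np.full((len(yo_vals), len(xo_vals)), -1, dtype=int)
    for i, y_o in enumerate(yo_vals):
        for j, x_o in enumerate(xo_vals):
            x_s, y_s = object_to_substrate(x_o, y_o, f1, z1, theta)
            hits = []
            for idx, (alpha, beta, k) in enumerate(facets):
                P_r, _ = reflect_on_facet(x_s, y_s, theta, alpha, beta, k, b)
                if abs(P_r[0] - k * b) <= b / 2:
                    hits.append((P_r[2], idx))
            if hits:                              # first facet met along the ray
                fov[i, j] = min(hits, key=lambda h: h[0])[1]
    return fov
```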

Fig. 8. Sub FOVs of mirror facets in object space. (a) One block, $\theta =0^\circ$, angle set A. (b) One block, $\theta =22.5^\circ$, angle set A. (c) One block, $\theta =22.5^\circ$, angle set B. (d) Three blocks, $\theta =22.5^\circ$, angle set A.

As indicated in Fig. 8, object points in the overlap and interspace areas encounter shadowing and blocking, respectively. In Fig. 8(a), the overall tilt angle of the image mapper is $\theta =0^\circ$, and the tilt angles of the facets are those listed in Table 1, denoted as angle set A. Here the shadowing effect arises solely from the spatial arrangement of the facets. In Fig. 8(b), the image mapper is tilted with respect to the incident rays, by $22.5^\circ$ in our case. The increased overlap and interspace mean that the system is more strongly affected by the shadowing effect. In Fig. 8(c), the overall tilt angle is still $22.5^\circ$, while the tilt angles of the facets are reduced to one fifth of angle set A, denoted as angle set B. This shows that with smaller tilt angles the shadowing effect is not a severe problem. Figure 8(d) presents the FOVs of three blocks. It should be noted that the facets at the block boundaries are the most affected by the shadowing effect because the height difference between neighboring facets is greatest there.

Hence, the shadowing effect imposes a trade-off between light throughput and image artifacts. According to the analysis, the shadowing effect is sensitive to both $\theta$ and $(\alpha _{m,n},\beta _{m,n})$. In [14], a beam splitter is used to separate the incident and reflected rays and $\theta$ is set to zero, which minimizes the shadowing effect but inherently decreases the light throughput. On the other hand, an image mapper with small tilt angles needs a beam expander to match the reimaging lens array, which decreases the light throughput by over 50% [12] and reduces compactness.

4. Experimental results

An imaging experiment was conducted to verify the simulation results based on the presented model. The experimental setup is presented in Fig. 9. The system was illuminated uniformly by an LED source combined with a diffuser. A 3 cm diameter aperture stop was placed at the front focal plane of the fore optics (FL = 60 mm, f/2.8D, Nikon) to ensure telecentricity. The intermediate image was sliced by a 69-facet image mapper, whose tilt angles are the same as those summarized in Table 1, and reflected into 23 directions. The image segments were then collimated by the collimating lens (FL = 50 mm, f/1.4D, Nikon). Finally, they were imaged on the detector (VA-29M, Vieworks) by a reimaging lens (GCL-010618, Daheng Optics) mounted on a three-dimensional translation stage.

Fig. 9. Experimental setup of the ISMS. (a) Schematic diagram. (b) Actual setup.

A qualitative comparison of the simulation and experimental results is shown in Fig. 10. The tilt and incompleteness of the image lines predicted by the simulation also appear in the experimental results. It should be noted that the incompleteness of the image lines is more serious in the experiment because the image mapper is additionally affected by the edge-eating effect during fabrication [13], which decreases the effective reflective area of each facet. In conclusion, the experimental results verify the validity of the theoretical model and the analysis of the shadowing effect. The enlargements of the experimental results are contrast-stretched to display the ghost image, which is discussed in detail in Section 5.

Fig. 10. Comparison of simulation and experimental results. (a) Simulation results. (b) Experimental results. Enlargements of sub-pupil #1 and #23 are shown.

The ray tracing model does not consider the non-uniformity of the light intensity; hence, the slope angle, defined as the angle between an image line and the vertical direction, is selected as the metric for quantitative comparison. The slope angles measured from the simulation results of model 1 and model 2 and from the experimental results are presented in Fig. 11. Overall, model 1 fails to predict the tilt of the image lines because the reflection points are approximated. The simulation results of model 2, the proposed model, show better consistency with the experimental results: the mean error between them is $0.057^\circ$ and the standard deviation is $0.121^\circ$. The slight deviation might be attributed to fabrication errors of the image mapper and assembly errors of the system. One major error source is the edge-eating effect, because the slope-angle extraction algorithm uses least-squares line fitting, which is sensitive to the profile of the image lines.
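
The slope-angle extraction is not specified beyond least-squares line fitting; one plausible realization, given here only as a sketch, fits the lit pixels of a single image line with a straight line and reports its angle from the vertical.

```python
import numpy as np

def slope_angle_deg(mask):
    """Angle (degrees) between a fitted image line and the vertical direction.

    mask : 2-D boolean array, True where the image line is lit
    """
    rows, cols = np.nonzero(mask)
    # least-squares fit of column as a linear function of row;
    # the fitted slope equals tan(angle from vertical)
    slope, _ = np.polyfit(rows.astype(float), cols.astype(float), 1)
    return np.degrees(np.arctan(slope))
```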

Fig. 11. Slope angle of image line.

5. Discussion and conclusion

In this paper, we presented an accurate ray tracing model of the ISMS based on vector operation. In addition to building the spatial relationship between the object point and pixel index on the detector, the model can be used to analyze the shadowing effect arising from the height difference between adjacent facets and the overall tilt of the image mapper. The simulation results were confirmed by the experimental results and showed good consistency.

A ghost image appeared in the experimental results because of secondary reflection on the side walls of the facets after the primary reflection, which deviates the reflected ray from its original propagation path. This phenomenon cannot be explained by the presented model because only the facet plane is modeled as a reflective surface and the side walls are not included. We plan to analyze this problem further by modeling all possible reflective surfaces. In addition, the edge-eating effect should be incorporated into the model because it can hardly be eliminated during fabrication.

Owing to the shadowing effect, some spatial information is lost on the image plane. Therefore, image inpainting is necessary after image reconstruction, which our team plans to undertake in future work.

In summary, accurate modeling and a thorough comprehension of the ISMS form the basis for exploiting the potential of the IMS, which plays an important role in the field of snapshot spectrometry.

Funding

National Natural Science Foundation of China (61635002); Natural Science Foundation of Beijing Municipality (4172038); Fundamental Research Funds for the Central Universities.

Acknowledgments

Anqi Liu and Lijuan Su are co-first authors. We are grateful to the editors and reviewers. Their advice helped us improve the quality and readability of this paper.

References

1. R. O. Green, M. L. Eastwood, C. M. Sarture, T. G. Chrien, M. Aronsson, B. J. Chippendale, J. A. Faust, B. E. Pavri, C. J. Chovit, M. Solis, M. R. Olah, and O. Williams, “Imaging spectroscopy and the airborne visible/infrared imaging spectrometer (AVIRIS),” Remote Sens. Environ. 65(3), 227–248 (1998).

2. D. A. Glenar, D. L. Blaney, and J. J. Hillman, “AIMS: acousto-optic imaging spectrometer for spectral mapping of solid surfaces,” Acta Astronaut. 52(2-6), 389–396 (2003).

3. L. Gao, R. T. Kester, N. Hagen, and T. S. Tkaczyk, “Snapshot Image Mapping Spectrometer (IMS) with high sampling density for hyperspectral microscopy,” Opt. Express 18(14), 14330–14344 (2010).

4. A. D. Elliott, L. Gao, A. Ustione, N. Bedard, R. Kester, D. W. Piston, and T. S. Tkaczyk, “Real-time hyperspectral fluorescence imaging of pancreatic β-cell dynamics with the image mapping spectrometer,” J. Cell Sci. 125(20), 4833–4840 (2012).

5. L. Gao, N. Bedard, N. Hagen, R. T. Kester, and T. S. Tkaczyk, “Depth-resolved image mapping spectrometer (IMS) with structured illumination,” Opt. Express 19(18), 17439–17452 (2011).

6. R. T. Kester, N. Bedard, L. Gao, and T. S. Tkaczyk, “Real-time snapshot hyperspectral imaging endoscope,” J. Biomed. Opt. 16(5), 056005 (2011).

7. L. Gao, R. T. Smith, and T. S. Tkaczyk, “Snapshot hyperspectral retinal camera with the image mapping spectrometer (IMS),” Biomed. Opt. Express 3(1), 48–54 (2012).

8. Z. Lavagnino, J. Dwight, A. Ustione, T.-U. Nguyen, T. S. Tkaczyk, and D. W. Piston, “Snapshot hyperspectral light-sheet imaging of signal transduction in live pancreatic islets,” Biophys. J. 111(2), 409–417 (2016).

9. A. Shadfan, H. Darwiche, J. Blanco, A. Gillenwater, R. Richards-Kortum, and T. S. Tkaczyk, “Development of a multimodal foveated endomicroscope for the detection of oral cancer,” Biomed. Opt. Express 8(3), 1525–1535 (2017).

10. R. T. Kester, N. Bedard, and T. S. Tkaczyk, “Image mapping spectrometry - a novel hyperspectral platform for rapid snapshot imaging,” Proc. SPIE 8048, 80480J (2011).

11. J. G. Dwight, T. S. Tkaczyk, D. Alexander, M. E. Pawlowski, R.-I. Stoian, J. C. Luvall, P. F. Tatum, and G. J. Jedlovec, “Compact snapshot image mapping spectrometer for unmanned aerial vehicle hyperspectral imaging,” J. Appl. Rem. Sens. 12(4), 044004 (2018).

12. L. Gao, R. T. Kester, and T. S. Tkaczyk, “Compact image slicing spectrometer (ISS) for hyperspectral fluorescence microscopy,” Opt. Express 17(15), 12293–12308 (2009).

13. R. T. Kester, L. Gao, and T. S. Tkaczyk, “Development of image mappers for hyperspectral biomedical imaging applications,” Appl. Opt. 49(10), 1886–1899 (2010).

14. T.-U. Nguyen, M. C. Pierce, L. Higgins, and T. S. Tkaczyk, “Snapshot 3D optical coherence tomography system using image mapping spectrometry,” Opt. Express 21(11), 13758–13772 (2013).

15. X. Ding, Y. Yuan, L. Su, W. Wang, Z. Ai, and A. Liu, “Modeling and optimization of image mapper for snapshot image mapping spectrometer,” IEEE Access 6, 29344–29352 (2018).

16. M. E. Pawlowski, J. G. Dwight, T.-U. Nguyen, and T. S. Tkaczyk, “High performance image mapping spectrometer (IMS) for snapshot hyperspectral imaging applications,” Opt. Express 27(2), 1597–1612 (2019).

17. L. Gao and T. S. Tkaczyk, “Correction of vignetting and distortion errors induced by two-axis light beam steering,” Opt. Eng. 51(4), 043203 (2012).

18. Y. Yuan, X. Ding, L. Su, and W. Wang, “Modeling and analysis for the image mapping spectrometer,” Chin. Phys. B 26(4), 040701 (2017).

19. X. Ding and Y. Yuan, “Geometric optical model of the snapshot image mapping spectrometer for data cube recovery,” IEEE Access 7, 28903–28912 (2019).

20. N. Bedard, N. Hagen, L. Gao, and T. S. Tkaczyk, “Image mapping spectrometry: calibration and characterization,” Opt. Eng. 51(11), 111711 (2012).

21. L. Yuan, J. Xie, Z. He, Y. Wang, and J. Wang, “Optical design and evaluation of airborne prism-grating imaging spectrometer,” Opt. Express 27(13), 17686–17700 (2019).

22. R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision (Cambridge University, 2003), chap. 6.

23. Y. Li, “Closed form analytical inverse solutions for Risley-prism-based beam steering systems in different configurations,” Appl. Opt. 50(22), 4302–4309 (2011).

24. M. Born and E. Wolf, Principles of Optics (Pergamon, 1980), chap. 3.

25. W. J. Smith, Modern Optical Engineering (McGraw-Hill, 2000), chap. 2.
