
Super-resolution for a dispersive spectrometer using a tilted area sensor and spectrally varying blur kernel interpolation

Open Access

Abstract

The grating, lens, and linear sensor determine a spectrometer’s wavelength resolution and measurement range. While conventional methods have tried to improve the optical design to obtain a better resolution, they are limited by the physical properties of these components. To improve the resolution, we introduce a super-resolution method from the computer vision field. We propose tilting an area sensor to realize accurate subpixel shifting and recovering a high-resolution spectrum using interpolated spectrally varying blur kernels. We experimentally validate that the proposed method achieves a high spectral resolution of 0.141 nm over the 400–800 nm range simply by tilting the sensor in the spectrometer.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Spectral measurements are used in various fields, including food inspection, remote sensing, and astronomical spectroscopy [1–3]. In such applications, a higher spectral resolution is often preferred because a spectrum contains fundamental information such as molecular structure.

The spectral resolution of a spectrometer is determined by the slit width, the number of grooves in the grating, the f-number of the imaging optics, the number of pixels contained in the sensor, and the pixel size. Conventionally, a higher spectral resolution has been achieved by improving the optical design [4–7]. However, it is difficult to improve the resolution while maintaining the measurement range with these conventional approaches.

In the computer vision field, computational super-resolution techniques have been widely investigated as a way to obtain high-resolution images from low-resolution sensors. Survey papers [8,9] have addressed the large body of literature on super-resolution techniques. Among them, sensor-shifting super-resolution techniques such as [10,11] are straightforward and effective. These techniques capture several images from slightly different viewpoints and computationally restore a higher-resolution image. The surveys also state that the key requirements for super-resolution are realizing subpixel sampling and obtaining the blur kernel, which characterizes the blur as a convolution operation. We apply such an image super-resolution technique to the spectrometer to improve its resolution computationally. There are two challenges to be considered when applying the super-resolution technique [9,12]. First, the low-resolution observations must either be shifted with subpixel accuracy or accurately registered after capture, both of which are very hard tasks. Second, it is difficult to estimate the blur kernel over the whole measurement range. Specifically, the blur kernel in the spectrometer varies spectrally due to the aberration of the lens and the diffraction of the grating, so it is very complicated to measure the blur at every wavelength in a measurement range of up to 10,000 bands.

We propose a subpixel sampling method based on area sensor tilting and a spectrally varying blur kernel estimation from sparse atomic emission lines. The diffraction limit of an ideal grating is determined by the Rayleigh criterion [13]. Our method improves the resolution by subpixel sampling and by deblurring the impulse response of the optics. Thus, our method can break the limitation of the image sensor resolution; however, the Rayleigh criterion still sets the physical upper bound of the achievable resolution. Our achievements are twofold. First, inspired by [14], we propose tilting a 2-D sensor instead of shifting a 1-D sensor to virtually achieve simple and accurate sensor shifting in one shot. Moreover, thanks to the precise pixel structure in the sensor, the measurements at adjacent rows are equivalent to sensor shifts with sub-micrometer accuracy. Second, we propose a spectrally varying blur kernel estimation from a sparse set of atomic emission lines. The atomic emission lines have precisely known wavelengths with negligibly small variance. We can measure the blur kernel from the emission lines; however, the emission lines are sparsely distributed along the wavelength axis. Our method therefore interpolates between the kernels measured at the emission lines; blur kernels at other wavelengths are interpolated via their cumulative distributions.

Figure 1 shows an overview of our spectral super-resolution method. The optical design is based on a conventional spectrometer. The differences from the conventional design are the tilting mechanism of the area sensor and the use of additional row measurements. The pixels of the area sensor are aligned at even intervals of a few micrometers, which realizes the accurate alignment of each row measurement. Finally, we deblur the aligned row measurements using an interpolated blur kernel.


Fig. 1. Overview of a super-resolution system with sensor tilting. The basic optical design is the same as that of a general spectrometer. The differences from the structure of a general spectrometer include using an area sensor instead of a linear sensor and capturing additional row measurements by tilting the sensor. The calibration step aligns each of the rows of the spectral sampling. Finally, the deblurring step deconvolves the line spread function of the wavelength from the blurred spectrum.


We compare our study with existing spectral super-resolution methods. Konishi et al. [7] proposed a super-resolution method using the moiré effect and achieved a resolution of 0.31 nm; however, their method restricts the measurement range to about 1.50 nm. We experimentally show that our prototype achieves a higher spectral resolution and a wider measurement range than this approach. Watanabe and Furukawa [14] proposed a sensor tilting method for a Fourier transform spectrometer and achieved a resolution improvement of up to 3.7 times; however, their method is unsuitable for sensors with a high fill factor, so it can suffer from noise. In contrast, our method works with sensors with a high fill factor, which is also better for high sensitivity, thanks to the deblurring process.

2. Sensor tilting super-resolution method

2.1 Mathematical model of the sensor tilting super-resolution

Our goal is to estimate the spectral intensity $f(\lambda )$ of the average irradiance across the slit. The optics in an actual spectrometer exhibit alignment errors in the optical axis and focus, which blur the diffracted rays along the spatial direction, so the observed spectrum is blurred. This blur is dominantly spectrally invariant; however, the optics also exhibit aberrations and diffraction errors that cause a spectrally varying blur. The spectrum blurred by the optics, $\bar {f}({\lambda })$, is represented as the convolution with the blur kernel $o_{\lambda }({u})$ as

$$\bar{f}({\lambda}) = \int_{-\infty}^{\infty} o_{\lambda}({u}) f({\lambda-u}) du.$$
The optical response function $o_{\lambda }({u})$ varies depending on the wavelength $\lambda$. The spectrometer samples the spectrum with a certain pixel size. Thus, we need to consider the integration over the spectral range corresponding to the pixel size. The linear sensor contains $N$ pixels, and the $n$-th pixel intensity ${i_n}$ can be obtained as
$${i_n} = \int_{-\infty}^{\infty} p_n({\lambda}) \bar{f}({\lambda}) d\lambda,$$
where $p_n({\lambda })~(0 \leq n < N)$ is the rectangular function. This window function expresses the measurement range of the wavelength at each pixel.

Suppose that $M$ subpixel shifts correspond to a 1-pixel shift in total. The intensity of the $n$-th pixel at the $m$-th subpixel shift can be obtained as

$${i_{m,n}} = \int_{-\infty}^{\infty} p_{m,n}({\lambda}) \bar{f}({\lambda}) d\lambda.$$
Discretizing Eq. (3),
$$\mathit{i} = \mathbf{P} \mathbf{O} \mathit{f},$$
where $\mathit {f}$ is an ${S}$-dimensional vector of the discretized spectrum, $\mathit {i}$ is an ${M}{N}$-dimensional vector of the intensities observed by the ${M}\times {N}$ pixels, and $\mathbf {P}$ is the window function matrix of the pixels. The matrix $\mathbf {O}$ is the ${S} \times {S}$ convolution matrix of the blur kernel, the so-called line spread function (LSF). Each row of the product of the window function matrix $\mathbf {P}$ and the blur kernel matrix $\mathbf {O}$ expresses the observation process, which models the mapping from the spectral axis onto the spatial axis of the image sensor.

The vector $\mathit {\bar {f}}$ is the blurred spectrum, and each row of the matrix $\mathbf {O}$ is the blur kernel at the corresponding wavelength in the measurement range. Here, the size of the matrix $\mathbf {P}$ is ${M} {N} \times {S}$. If ${S} \leq {M} {N}$, the super-resolved spectrum can be obtained as

$$\mathit{\hat{f}} = (\mathbf{P} \mathbf{O})^{+}\mathit{i},$$
where $\mathit {\hat {f}}$ is the estimate of the super-resolved spectrum and $(\mathbf {P} \mathbf {O})^{+}$ is the pseudo-inverse matrix.
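To make the discrete model concrete, the following sketch builds small $\mathbf {P}$ and $\mathbf {O}$ matrices and recovers a spectrum with the pseudo-inverse of Eq. (5). The sizes, the Gaussian stand-in for the LSF, and the synthetic test spectrum are illustrative assumptions, not values used in our experiments.

```python
# Minimal sketch of the discrete model i = P O f and the pseudo-inverse
# recovery of Eq. (5). All sizes and the Gaussian LSF are assumptions
# chosen only for illustration.
import numpy as np

S, N, M = 600, 100, 6          # spectrum samples, pixels per row, subpixel shifts
pixel_width = S // N           # high-resolution samples covered by one pixel

# O: S x S convolution matrix; a spectrally invariant Gaussian LSF is
# assumed here (the real, spectrally varying LSF is measured in Sec. 2.2).
u = np.arange(S)
O = np.exp(-0.5 * ((u[:, None] - u[None, :]) / 3.0) ** 2)
O /= O.sum(axis=1, keepdims=True)

# P: (M N) x S window-function matrix; pixel n at shift m integrates a
# one-pixel-wide spectral band offset by m/M of a pixel.
P = np.zeros((M * N, S))
for m in range(M):
    for n in range(N):
        start = n * pixel_width + m * pixel_width // M
        P[m * N + n, start:start + pixel_width] = 1.0

f_true = np.zeros(S)
f_true[[150, 160, 420]] = 1.0                  # synthetic emission lines
i_obs = P @ O @ f_true                         # forward observation (Eq. 4)
f_hat = np.linalg.pinv(P @ O) @ i_obs          # pseudo-inverse recovery (Eq. 5)
```

In practice, $\mathbf {P}\mathbf {O}$ is not built from such analytic factors but is measured and interpolated directly, as described in Sec. 2.2.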

Utilizing the characteristic that the intensity of the spectrum is always positive, we formulate an optimization problem with a non-negativity constraint, represented as

$$\mathit{\hat{f}} = \mathop{\textrm{argmin}}\limits_{\mathit{f}\succeq 0} ||\mathit{i} - \mathbf{P} \mathbf{O} \mathit{f}||^{2}_2$$
As this is a convex optimization problem, the global optimum can be obtained, and solvers such as [15] are available.
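The constrained problem of Eq. (6) can be solved, for instance, with a generic non-negative least-squares routine. The sketch below reuses $\mathbf {P}\mathbf {O}$ and the observation from the previous sketch and uses scipy.optimize.nnls as a stand-in for the large-scale solver cited in [15].

```python
# Non-negative least-squares solve of Eq. (6); scipy.optimize.nnls is a
# generic stand-in for the solver of [15].
from scipy.optimize import nnls

A = P @ O                       # observation process from the sketch above
f_hat_nn, _ = nnls(A, i_obs)    # argmin_{f >= 0} ||i - A f||_2
```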

2.2 Observation process estimation

There are two main factors that degrade the resolution of the spectrum. The first is the optical blur due to the slit, lens, and grating, and the other is the coarse sampling of the image sensor. It is not enough to remove only the optical blur by deconvolution, because the upper limit of the resolution improvement is constrained by the pixel width. Therefore, we introduce a subpixel shift to the spectrometer to achieve a resolution improvement finer than one pixel.

Obtaining an accurate subpixel sampling matrix $\mathbf {P}$ and blur kernel matrix $\mathbf {O}$ is key to achieving super-resolution; however, the spectrometer can only measure the combined observation process $\mathbf {P}\mathbf {O}$. In fact, it is unnecessary to factorize the matrix $\mathbf {P}\mathbf {O}$ because the optimization uses neither $\mathbf {P}$ nor $\mathbf {O}$ alone. In this study, as a calibration process, we capture narrow-band spectra at several wavelengths given by atomic emission lines, such as those of a mercury-vapor (Hg-Ar) lamp [16], and interpolate them along the spectral axis to compose the ${M} {N} \times {S}$ matrix $\mathbf {P}\mathbf {O}$.

As illustrated in Fig. 2, the emission lines of the mercury-vapor lamp are sparsely distributed along the spectral axis, and it is well known that each of them corresponds to a precisely known wavelength with negligibly small variance. When we measure the emission lines with our optics, they appear as a set of slanted lines on the 2-D image, as shown in Fig. 3. The image of each single line is the response of the pixels to the specific wavelength of the corresponding emission line, which can be set to the corresponding column of $\mathbf {P}\mathbf {O}$. Thus, the task is to extract the image of each single line independently from the captured image. Note that we ignore emission lines that are close to other emission lines, such as (404.656, 407.783 nm) and (576.960, 579.066 nm) in Fig. 2, since their blur kernels overlap with each other on the image. The proposed method applies a line segment detector [17] to the image and extracts the surrounding region with a predefined margin around each of the detected lines, as shown in Fig. 3. In this process, we obtain a set of ${M} \times {N}$ images of single emission lines; a sketch of this extraction step is shown below. The observation process is spectrally varying in practice due to chromatic aberration and off-axis aberration; the off-axis aberration of the focusing lens affects the observations as a spectrally varying blur kernel. Additionally, each column of the image represents the transition of the observation process under sensor shifting.
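The following sketch illustrates the extraction step under simplifying assumptions: instead of the line segment detector of [17], each slanted emission line is located by peak finding on the central row, and close line pairs are not filtered out. The margin and relative threshold are hypothetical values.

```python
# Simplified sketch of extracting per-wavelength observation processes
# from a captured Hg-Ar image (M rows x N columns). A per-row peak search
# replaces the line segment detector of [17]; margin and threshold are
# illustrative assumptions.
import numpy as np

def extract_observation_processes(img, margin=10, rel_thresh=0.2):
    """Return a list of M x N images, each keeping only one emission line."""
    M_rows, N_cols = img.shape
    profile = img[M_rows // 2]                      # central row as a 1-D profile
    thresh = rel_thresh * profile.max()
    peaks = [c for c in range(1, N_cols - 1)
             if profile[c] > thresh
             and profile[c] >= profile[c - 1] and profile[c] > profile[c + 1]]
    kernels = []
    for c in peaks:
        lo, hi = max(c - margin, 0), min(c + margin + 1, N_cols)
        k = np.zeros_like(img)
        k[:, lo:hi] = img[:, lo:hi]                 # keep the region around the line
        kernels.append(k)                           # one M x N observation process
    return kernels
```

Each returned ${M} \times {N}$ image is then vectorized into the column of $\mathbf {P}\mathbf {O}$ corresponding to the wavelength of its emission line, as described below.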


Fig. 2. Atomic emission lines of mercury and argon [16]. The wavelengths of emission lines are physically determined by the atoms. The spectrum of an emission line can be regarded as an impulse in the wavelength domain.


Fig. 3. Line detection and generation of the observation process. We extract the observation process from the image onto which the emission lines are projected. Our method applies line segment detection [17] to the captured image and extracts the surrounding region of each detected line. The extracted images are the observation processes corresponding to the wavelengths of the emission lines.


Thus, each extracted image is vectorized and included as the column of $\mathbf {P}\mathbf {O}$ corresponding to the wavelength of its emission line. Since the emission lines are spectrally sparse, this process only sparsely fills $\mathbf {P}\mathbf {O}$. Therefore, our method interpolates the sparsely obtained observation process to fill in the remaining wavelengths. The interpolation is performed independently row by row over the set of extracted images. To interpolate the observation process, we make three assumptions derived from the properties of the optics. First, the shape of the LSF varies only slightly with wavelength: the main factor of the blur is the slit width, which is spectrally invariant, whereas the lens aberration, which has less impact, is the main cause of the spectrally varying blur. Second, the grating maps the spectral axis smoothly onto the spatial axis. Lastly, the spectral response of the sensor is smooth. Under these assumptions, the observation process varies smoothly along the wavelength.

For the interpolation of the observation process, the most straightforward method is linear interpolation between two observation processes measured from emission lines. However, when two observation processes are simply averaged, the interpolated observation process exhibits two small peaks. We instead require an interpolated observation process whose shape is similar to the measured ones and whose peak shifts to the interpolated position. To solve this problem, we interpolate the observation process using a cumulative distribution function (CDF). We regard the spatial distribution of the pixels for a specific wavelength, which is a column of an extracted image, as a probability density function (PDF) and adopt the CDF interpolation method [18]. This method computes the CDFs from the measured single emission lines and linearly interpolates the CDFs along the spectral axis. The interpolated PDFs are then recovered from the interpolated CDFs. The PDFs are ${N}$-dimensional vectors. Finally, the interpolated observation process at any wavelength is obtained as an ${M}\times {N}$ image. This method is more effective than direct interpolation of each single emission line, as shown in Fig. 4.
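As a minimal sketch of the CDF interpolation step, assume that two measured kernels are given as 1-D PDFs over the pixel axis (one row of the extracted images) and that the interpolation weight t is the normalized spectral distance between the two calibration wavelengths.

```python
# Sketch of CDF-based kernel interpolation in the spirit of [18]; the
# kernels are 1-D PDFs over the pixel axis, and t in [0, 1] is the
# normalized spectral position between the two calibration wavelengths.
import numpy as np

def interp_kernel_cdf(pdf_a, pdf_b, t):
    """Interpolate between pdf_a (t = 0) and pdf_b (t = 1) via their CDFs."""
    x = np.arange(len(pdf_a))
    cdf_a = np.cumsum(pdf_a) / pdf_a.sum()
    cdf_b = np.cumsum(pdf_b) / pdf_b.sum()
    # For each probability level, blend the positions at which the two
    # CDFs reach that level, then rebuild a CDF on the original grid.
    levels = np.linspace(0.0, 1.0, len(x))
    xa = np.interp(levels, cdf_a, x)
    xb = np.interp(levels, cdf_b, x)
    x_mix = (1 - t) * xa + t * xb
    cdf_mix = np.interp(x, x_mix, levels)
    pdf_mix = np.diff(cdf_mix, prepend=0.0)
    return pdf_mix / pdf_mix.sum()
```

Unlike a direct blend of the PDFs, this construction shifts the peak toward the interpolated position while preserving the kernel shape, which is the behavior illustrated in Fig. 4.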


Fig. 4. Observation process interpolation. A straightforward method is interpolation of the probability density functions; however, this method has difficulty achieving smooth interpolation. Our method adopts a cumulative distribution function interpolation method [18] for smooth interpolation.


3. Simulation

We verify that the super-resolution technique can be applied to spectroscopy. We synthesize multiple lower-resolution spectra by downsampling a raw spectrum, apply the proposed super-resolution technique to the downsampled spectra, and then compare the result to the original spectrum. Figure 5 shows the generation process of the simulation data. For the original spectrum, we measure the spectrum of sunlight in a real environment. The original spectrum is measured with 5472 pixels on a central row of an area image sensor. For downsampling, we separate the measurements into 76 groups of 72 adjacent pixels and take the summation of each group to synthesize a spectrum measured with 76 pixels. For the sensor shifting, we shift the raw spectrum one sample at a time and obtain 72 different downsampled spectra. This process emulates sensor-shift measurements using a slightly tilted area image sensor with 76 $\times$ 72 pixels. The emulated spectra include quantization error. Figure 6 shows the simulation results.
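A compact sketch of this data generation follows, with a random placeholder standing in for the measured sunlight spectrum and a simple wrap-around at the end of the spectrum as a simplification.

```python
# Sketch of the Fig. 5 data generation: a 5472-sample raw spectrum is
# turned into 72 low-resolution spectra of 76 pixels, each shifted by
# 1/72 of a pixel relative to the previous one. `raw` is a placeholder
# for the measured sunlight spectrum.
import numpy as np

raw = np.random.rand(5472)                 # placeholder for the measured spectrum
N, M = 76, 72                              # low-resolution pixels, subpixel shifts

low_res = np.zeros((M, N))
for m in range(M):
    shifted = np.roll(raw, -m)             # shift by m samples = m/72 of a pixel
    low_res[m] = shifted.reshape(N, M).sum(axis=1)   # bin 72 samples per pixel
```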


Fig. 5. Simulation data generation process. We measure raw spectrum data containing 5472 pixels using our constructed spectrometer. We generate a 5472 × 72 image in which each row is shifted by one additional sample. We downsample the image by 1/72 along the spectral axis to acquire the subpixel-shifted spectral data.


Fig. 6. Simulation result. The upper part shows the spectrum of the sun over the whole measured wavelength range. The lower part is a zoom of the absorption at the Fraunhofer lines. The orange lines in the lower figure show the center wavelengths of the Fraunhofer lines. In the ground truth, the valleys can be recognized clearly. The small valleys cannot be recognized in the low-resolution spectrum, whereas they are recovered in the super-resolved spectrum.


The valleys of the spectrum correspond to the Fraunhofer lines, which represent absorption by atoms in the atmosphere. In the range of 500 to 600 nm, some small valleys in the spectrum are blurred, and it is difficult to recognize them in the low-resolution spectrum. In the super-resolved spectrum, these valleys are recovered from the low-resolution spectra. In this experiment, we recover a high-resolution spectrum from low-resolution spectra that include quantization error and confirm that the recovered result agrees with the ground truth.

4. Experiment and implementation

We perform three experiments to confirm the effectiveness of the proposed method.

4.1 Hardware implementation

We built a transmission spectrometer, as shown in Fig. 7. We put a diffuser in front of the camera lens to make the incident light from the scene uniform. The light is collected by a camera lens (FUJINON HF12.5HA-1B, 12.5 mm, F/1.4–16), cut out by a slit (Thorlabs VA100C), and focused on a grating (GT25-06V, 25.4 mm square, 600 lines/mm) through a collimating lens (Thorlabs LB1471, 50 mm focal length, 25.4 mm diameter). The dispersed light is focused on an image sensor (FLIR BFS-U3-200S6M-C, 5472 × 3648 pixels, 2.4 µm pixel pitch) through an achromatic lens (AC254-050-A-ML, 50 mm focal length, 25.4 mm diameter). This sensor can be tilted around the optical axis that passes through the grating and the focusing lens. In this spectrometer, we set the f-number of the camera lens to F/16 and the slit width to 1 µm.

4.2 Spectral super-resolution in a real environment

We evaluate the improvement in resolution of our method and the observation process estimation by recovering Hg-Ar emission lines using our constructed spectrometer. We measure the emission lines from a low-pressure Hg-Ar lamp (Ocean Insight HG-2) in this experiment. For the observation process estimation, we use the emission lines of the Hg-Ar lamp in the 400 to 800 nm range, excluding the two close line pairs (404.656, 407.783 nm) and (576.960, 579.066 nm). We compute the gradient of the emission lines and estimate the sampling period of the super-resolved spectrum, as sketched below. Ideally, the tilting angle of the image sensor determines the sampling period. In practice, however, it is hard to rotate the sensor precisely to the designated sampling period. Thus, we calibrate the sampling period from the data to determine the actual sensor rotation. In the experiment, the sensor is tilted so that the sampling period is around 6, and we obtain a sampling period of M = 6.17 from the captured data. We use 6 rows from the raw measured image as the low-resolution observations, and the range of the super-resolved spectrum is 400 to 800 nm with a sampling period of 0.015 nm. We evaluate the resolution of the spectrometer by computing the full width at half maximum (FWHM) for both the blurred and super-resolved spectra. The raw measurements and the estimated results are shown in Fig. 8.
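The sketch below shows one way the sampling period could be calibrated from the data: the subpixel peak position of one detected emission line is fit against the row index, and the reciprocal of the slope (pixels of shift per row) gives the number of rows corresponding to a 1-pixel shift. The centroid-based peak localization is an illustrative assumption, not necessarily our exact procedure.

```python
# Hedged sketch of calibrating the sampling period M from the gradient of
# one detected emission line; centroid-based subpixel localization is an
# illustrative choice.
import numpy as np

def estimate_sampling_period(line_img):
    """line_img: rows x cols crop containing a single slanted emission line."""
    rows = np.arange(line_img.shape[0])
    cols = np.arange(line_img.shape[1])
    # subpixel peak position per row via intensity-weighted centroids
    centroids = (line_img * cols).sum(axis=1) / line_img.sum(axis=1)
    slope, _ = np.polyfit(rows, centroids, 1)   # pixels of shift per row
    return 1.0 / abs(slope)                     # rows per 1-pixel shift = M
```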


Fig. 7. Top view of the implementation. To make the incident light on the area sensor uniform, we put a diffuser in front of the camera lens. The camera lens condenses the rays from the scene. The slit blocks the horizontal rays. The collimating lens makes the rays parallel, and the grating disperses the rays. The focusing lens focuses the image onto the area sensor plane.


Fig. 8. Results of the blurred and the recovered spectrum of the emission lines. Each spectrum is normalized by its maximum intensity. The full width at half maximum (FWHM) of the blurred spectrum (Blurred) and the recovered spectrum (SR) is shown around the blue box. The super-resolution results show the emission lines with observation process removal. The inset shows an enlarged view around 580 nm indicated by the green box. Two peaks can be separately identified in the recovered spectrum.


We evaluate the resolution using the two emission lines at 576.960 and 579.066 nm that were excluded from the observation process estimation. These atomic emissions physically have definite spectra, and their uncertainty is 0.00010 nm [16], which is much lower than the sampling width of the super-resolved spectrum. The spectral gap between the two emissions is approximately 2 nm; however, they cannot be separated in the blurred spectrum measured by our spectrometer. The FWHM of the blurred spectral line at 546.074 nm, which is close to the two peaks, is 1.974 nm, comparable to the spectral resolution needed to resolve the emission line pair. In contrast, the proposed method recovers the two emission lines separately in the super-resolved spectrum. The FWHMs at 576.960 and 579.066 nm are 0.141 and 0.153 nm, respectively. From these results, our method achieves about 12.9 times better resolution than the raw blurred spectrum thanks to the deblurring and super-resolution. To evaluate the contributions separately, we also apply only the deblurring process to the raw spectrum, which is identical to the result with $M=1$. As a result, the FWHMs at 576.960 and 579.066 nm are 0.522 and 0.403 nm, respectively. This means that the subpixel sampling process contributes a 3.4–3.7 times better resolution when $M=6$, and the deblurring process contributes a 3.7–4.8 times better resolution.
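For reference, a minimal FWHM computation on a uniformly sampled spectrum is sketched below; it assumes a single dominant peak well inside the sampled range and linear interpolation at the half-maximum crossings. For the two close lines, the spectrum would be cropped around each peak before applying such a measure.

```python
# Minimal FWHM computation for a spectrum containing one dominant peak;
# assumes the peak lies well inside the sampled wavelength range.
import numpy as np

def fwhm(wavelengths, spectrum):
    half = spectrum.max() / 2.0
    above = np.where(spectrum >= half)[0]
    lo, hi = above[0], above[-1]
    # linearly interpolate the two half-maximum crossings
    left = np.interp(half, [spectrum[lo - 1], spectrum[lo]],
                     [wavelengths[lo - 1], wavelengths[lo]])
    right = np.interp(half, [spectrum[hi + 1], spectrum[hi]],
                      [wavelengths[hi + 1], wavelengths[hi]])
    return right - left
```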

4.3 Evaluation of the observation process estimation

We evaluate the accuracy of the observation process estimation by comparing the interpolated observation process with the observation process measured from the emission lines. For the estimation, we use four emission lines of the Hg-Ar lamp at 435.833, 546.074, 696.543, and 738.398 nm and estimate the observation process matrix by the CDF interpolation method. For the evaluation, we select five emission lines different from those used in the interpolation. The experimental setup is the same as that in Sec. 4.2. We evaluate the observation process by the distance between the peaks of the ground truth and the estimated observation process, and by the maximum normalized cross-correlation (NCC), which confirms the similarity between the measured and estimated observation processes; both metrics are sketched below. The ground truth of the observation process is measured from the emission lines of the Hg-Ar lamp. Figure 9 shows the interpolated observation process and the observation process measured from the emission lines. Table 1 shows the distance between the ground truth and the estimated observation process and the maximum value of the NCC.
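A sketch of the two metrics of Table 1 follows, treating the measured and estimated observation processes as 1-D vectors (e.g., one row of the ${M}\times {N}$ kernel images); the zero-mean, unit-variance normalization is one common NCC convention and an assumption here.

```python
# Sketch of the Table 1 metrics: peak-position distance and maximum
# normalized cross-correlation (NCC) between a measured and an estimated
# observation-process kernel, both given as 1-D vectors.
import numpy as np

def peak_distance(k_meas, k_est):
    return abs(int(np.argmax(k_meas)) - int(np.argmax(k_est)))

def max_ncc(k_meas, k_est):
    a = (k_meas - k_meas.mean()) / k_meas.std()
    b = (k_est - k_est.mean()) / k_est.std()
    corr = np.correlate(a, b, mode="full") / len(a)
    return corr.max()
```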


Fig. 9. The estimated observation process $o_{\lambda }({u})$ and ground truth. We use four emission lines, 435.833 nm, 546.074 nm, 696.543 nm, 738.398 nm, to interpolate the observation process from the Hg-Ar lamp. We capture the Hg-Ar lamp for the ground truth and compare it to the interpolated observation process to evaluate the estimation accuracy.


Table 1. Normalized cross-correlation and distance between two peaks

The estimated observation processes around 700 nm were highly similar to the ground truth. However, at 404.656 nm, the shape of the estimated observation process differed from the measured shape, and its peak was shifted. We consider three possible causes. First, this observation process is estimated from the two emission lines at 435.833 and 546.074 nm, so it is an extrapolation, which worsens the estimation accuracy. Second, the nearby emission line at 407.783 nm affects the estimation accuracy. Finally, this wavelength is imaged near the periphery of the lens, where the distortion is larger.

5. Conclusion

In this study, we propose a super-resolution method for enhancing spectral measurements by slightly tilting an area sensor instead of shifting a linear sensor. Our method requires only the simple setup of tilting the area sensor to provide an accurate subpixel shift. A sensor-shifting method requires multiple measurements and is unsuitable for dynamic scenes. In spectroscopy, the grating disperses the incident light along a 1-D spatial axis; thus, our method uses an area sensor and can utilize the other 1-D spatial axis for the multiple measurements. We construct a spectrometer and reconstruct a high-resolution spectrum from a real scene. We estimate the spectrally varying blur kernel by interpolating the observation processes measured from sparse atomic emission lines using a cumulative distribution function interpolation method. We evaluate the observation process estimation by comparing the interpolated observation process with the observation process measured from the emission lines. In the experiment, we evaluate the spectral resolution by the FWHM, and our prototype achieves a resolution of about 0.141 nm in the 400–800 nm measurement range; the subpixel sampling process contributes a 3.4–3.7 times better resolution when $M=6$, and the deblurring process contributes a 3.7–4.8 times better resolution, for a total improvement of 12.9 times.

Our method improves the resolution while maintaining the measurement range, which makes it applicable to various fields. Our method also has the advantage that it can be easily applied to a conventional spectrometer by substituting a tilted area sensor for the linear sensor.

Funding

Japan Society for the Promotion of Science (JP18K04977, JP19H04138, JP19J14999, JP20K21816); Core Research for Evolutional Science and Technology (JPMJCR1764).

Disclosures

The authors declare no conflicts of interest.

References

1. J.-H. Qu, D. Liu, J.-H. Cheng, D.-W. Sun, J. Ma, H. Pu, and X.-A. Zeng, “Applications of near-infrared spectroscopy in food safety evaluation and control: A review of recent research advances,” Crit. Rev. Food Sci. Nutr. 55(13), 1939–1954 (2015). [CrossRef]  

2. E. J. Milton, “Review article principles of field spectroscopy,” Int. J. Remote. Sens. 8(12), 1807–1827 (1987). [CrossRef]  

3. F. Eisenhauer and W. Raab, “Visible/infrared imaging spectroscopy and energy-resolving detectors,” Annu. Rev. Astron. Astrophys. 53(1), 155–197 (2015). [CrossRef]  

4. S. S. Vogt, S. L. Allen, B. C. Bigelow, L. Bresee, W. E. Brown, T. Cantrall, A. Conrad, M. Couture, C. Delaney, H. W. Epps, D. Hilyard, D. F. Hilyard, E. Horn, N. Jern, D. Kanto, M. J. Keane, R. I. Kibrick, J. W. Lewis, J. Osborne, G. H. Pardeilhan, T. Pfister, T. Ricketts, L. B. Robinson, R. J. Stover, D. Tucker, J. M. Ward, and M. Wei, “HIRES: the high-resolution echelle spectrometer on the Keck 10-m Telescope,” in Instrumentation in Astronomy VIII, vol. 2198 D. L. Crawford and E. R. Craine, eds., International Society for Optics and Photonics (SPIE, 1994), pp. 362–375.

5. C. Schwab, J. F. P. Spronck, A. Tokovinin, and D. A. Fischer, “Design of the CHIRON high-resolution spectrometer at CTIO,” in Ground-based and Airborne Instrumentation for Astronomy III, vol. 7735 I. S. McLean, S. K. Ramsay, and H. Takami, eds., International Society for Optics and Photonics (SPIE, 2010), pp. 1702–1708.

6. A. Scheeline, “How to design a spectrometer,” Appl. Spectrosc. 71(10), 2237–2252 (2017). [CrossRef]  

7. T. Konishi, Y. Yamasaki, and T. Nagashima, “Super spectral resolution beyond pixel Nyquist limits on multi-channel spectrometer,” Opt. Express 24(23), 26583–26598 (2016). [CrossRef]  

8. S. Cheol Park, M. Kyu Park, and M. Gi Kang, “Super-resolution image reconstruction: a technical overview,” IEEE Signal Process. Mag. 20(3), 21–36 (2003). [CrossRef]  

9. K. Nasrollahi and T. B. Moeslund, “Super-resolution: A comprehensive survey,” Mach. Vision Appl. 25(6), 1423–1468 (2014). [CrossRef]  

10. M. Irani and S. Peleg, “Improving resolution by image registration,” CVGIP: Graph. Models Image Process. 53(3), 231–239 (1991). [CrossRef]  

11. M. Elad and A. Feuer, “Restoration of a single superresolution image from several blurred, noisy, and undersampled measured images,” IEEE Trans. on Image Process. 6(12), 1646–1658 (1997). [CrossRef]  

12. P. Gege, J. Fries, P. Haschberger, P. Schötz, H. Schwarzer, P. Strobl, B. Suhr, G. Ulbrich, and W. Jan Vreeling, “Calibration facility for airborne imaging spectrometers,” ISPRS J. Photogramm. Remote. Sens. 64(4), 387–397 (2009). [CrossRef]  

13. G. R. Harrison, “The production of diffraction gratings: II. The design of echelle gratings and spectrographs,” J. Opt. Soc. Am. 39(7), 522–528 (1949). [CrossRef]  

14. A. Watanabe and H. Furukawa, “Super-resolution technique for high-resolution multichannel Fourier transform spectrometer,” Opt. Express 26(21), 27787–27797 (2018). [CrossRef]  

15. D. Kim, S. Sra, and I. S. Dhillon, “A non-monotonic method for large-scale non-negative least squares,” Optim. Methods Softw. 28(5), 1012–1039 (2013). [CrossRef]  

16. A. Kramida, Yu. Ralchenko, J. Reader, and NIST ASD Team, NIST Atomic Spectra Database (ver. 5.7.1), [Online]. Available: https://physics.nist.gov/asd [2020, March 16]. National Institute of Standards and Technology, Gaithersburg, MD (2019).

17. R. Grompone von Gioi, J. Jakubowicz, J.-M. Morel, and G. Randall, “LSD: A fast line segment detector with a false detection control,” IEEE Trans. Pattern Anal. Mach. Intell. 32(4), 722–732 (2010). [CrossRef]  

18. A. L. Read, “Linear interpolation of histograms,” Nucl. Instrum. Methods Phys. Res., Sect. A 425(1-2), 357–360 (1999). [CrossRef]  
