Optica Publishing Group

Spectral acquisition method based on axial chromatic and spherical aberrations of lens

Open Access

Abstract

In this study, a spectral acquisition method is proposed in which axial chromatic and spherical aberrations are introduced as an error function. These aberrations cause the focal length to change as the wavelength of the incident light changes. A coefficient matrix representing the variation in the intensity distribution of each image, formed at the focal point (the detection position) corresponding to a wavelength, is obtained by calibration. The least squares method is used to reconstruct the spectrum. The numerical simulation results show that the spectral correlation coefficient and the spectral mean square error between the reconstructed spectrum and the original spectrum are 0.9997 and 0.0025, respectively; for the polychromatic mercury-lamp spectrum measured with our experimental set-up, the corresponding values are 0.9683 and 0.0204. These results confirm the feasibility and efficiency of the proposed spectral imaging method.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

The spectral imaging technique is widely used for identifying materials and studying their properties. Spectral imaging was originally used in the exploration of geological and mineral resources [1]. Recent advancements in imaging technology have extended the application of this technique to several other fields such as environmental monitoring [2], agriculture [3], intelligence and surveillance [4], and criminal investigation [5]. At present, the optical design of spectral imaging systems is primarily based on four methods: prism dispersion [6], grating dispersion [7,8], filters [9–11], and interference [12–15]. A spectrometer based on dispersion is relatively more efficient than the other methods and is extensively used in aviation and space research. The traditional prism-dispersion method spreads the spectrum into an array of constituent spectral colors along the vertical axis. The detector in such systems can only detect 1/N of the energy of the target point (N is the number of spectral channels), so the signal-to-noise ratio of the collected data is relatively low. Moreover, the correction of off-axial aberrations in such systems is a cumbersome process due to the complexity of the optical system, and it involves high costs. Grating-based dispersion systems use entrance slits, which reduce the luminous flux and the energy utilization of the system.

Several researchers have exploited chromatic aberration in optical systems for various applications [16–20]. A chromatically dispersive optical system was proposed for hyperspectral imaging applications that employs focusing optics with a color-corrected tilted focal plane, where each spectral component intercepts the correct focal distance depending on the wavelength [17]. Another study shows that, for a convex-plano or plano-convex lens, the lens thickness does not influence the chromatic variation of the effective focal length; the influence of the center thickness on the chromatic focal-length variation is more pronounced for lower indices of refraction [16]. Recently, Ojaghi et al. reported an ultraviolet hyperspectral interferometric (UHI) microscopy technique that utilizes chromatic aberrations to obtain a stack of through-focus intensity images at various wavelengths. An iterative solution of the transport-of-intensity equation (TIE) is then used to recover the phase information from these through-focus images and produce in-focus images at all wavelengths without moving the sample or objective. The proposed configuration reduces the cost and instrumental complexity significantly, while enabling fast, wide-area imaging with better photon efficiency for biomedical applications [18]. Volkova et al. presented a method for measuring the distance to photographed objects that utilizes the axial chromatic aberration of the lens and the degree of defocusing of the photograph, based on the color range used during shooting; the object distances were calculated using local analysis of the spatial Fourier spectrum of the images [19]. In addition, Faggiano et al. presented the design of a new spectroscope based on axial chromatic aberration [20]. Further, Molesini et al. introduced a controlled axial chromatic aberration into an optical imaging system, using a lens with a back focal length (BFL) varying monotonically with wavelength and forming a rainbow of axially dispersed foci; such a system can be used for optical metrology and distance-sensing applications [21]. However, the studies reported in [20,21] lacked complete spectrum analysis. Later, a diffractive optical element (DOE)-based spectrometer for imaging and dispersion in the visible and IR regions was proposed. This spectrometer uses a monochromatic charge-coupled device (CCD) to capture the in-focus and defocused images, and computer tomography techniques are employed for processing these images [22]. Hinnrichs et al. proposed the theory behind the image multispectral sensing used in the hyperspectral cameras developed by Pacific Advanced Technology (PAT) over the past several years. The hyperspectral camera is essentially a dispersive spectrometer that uses a single diffractive optical element for both imaging and dispersion. The lens is tuned for a single wavelength, thereby providing maximum diffraction efficiency at that particular wavelength and high efficiency throughout the spectral band-pass of the camera [23]. Although DOE-based systems are efficient, they are difficult to manufacture due to their complex set-up.

In this paper, we present a model and theory of a cost-efficient dispersive imaging system that uses a lens as the dispersive component. The advantages of using a lens in our system are manifold: it is an inexpensive component, it provides better luminous efficiency than grating-based systems, and it maintains a better signal-to-noise ratio than prism-based systems. In our model, the spectral data are collected only along the optical axis; therefore, the effect of off-axial aberrations is minimal, which greatly reduces the complexity of the experimental set-up. The proposed method exploits the axial chromatic aberrations by selecting the detection positions as the focal points corresponding to each spectral wavelength. The axial chromatic and spherical aberrations are used as an error function in our model for reconstructing the spectra. The spectral information is analyzed in the following steps: i) the intensity distributions corresponding to twenty selected wavelengths in the range of 480–780 nm, at an interval of 20 nm, are simulated at their respective focal points (detection positions); ii) the intensity distributions of the polychromatic light are simulated as a function of the detection (focal) positions corresponding to the selected wavelengths; iii) the coefficients of the discrepancy matrix are calculated by analyzing the results obtained in steps i) and ii) using the proposed model; and iv) the resultant matrix equation is solved by the least squares method to reconstruct the spectra.

2. Modeling of the system

In an imaging system, the axial chromatic and spherical aberrations tend to diffuse a spot to a size larger than the Airy disk. In this work, we have developed a cost-efficient and simple spectral acquisition method that utilizes the axial chromatic aberration while accounting for the spherical aberration and defocus associated with it, as discussed in the previous section [24–28], by introducing an error function into the imaging model.

First, we briefly introduce the axial chromatic aberration in our system: a lens creates an array of focal points on the optical axis for the various wavelengths of the incident light from a collimated polychromatic source (Fig. 1). The lens is located in the x-y plane; therefore, the focal-plane coordinates corresponding to the wavelength λj are denoted as (xj, yj). The optical axis is the z-axis, and the center of the lens is chosen as the origin of the coordinate system. Therefore, the detection positions of the focal points are given as zj, which represent the respective focal lengths f(λj) for the wavelengths λj.

Fig. 1 Illustration of chromatic aberration.

Owing to the axial chromatic aberration, defocus of an image occurs when the detection device is not placed at the focal plane of the incident monochromatic light. The defocus can be described using Eq. (1).

$$\delta_{\text{defocus}} = \left[ \frac{1}{z_j} - \frac{1}{f(\lambda)} \right],\tag{1}$$
where f(λ) is the focal length of the incident monochromatic light of wavelength λ, and zj is the detection position. The defocus leads to a larger spot size, which degrades the image quality of an optical imaging system.
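To make the defocus term concrete, the following sketch evaluates Eq. (1) for a thin lens whose focal length varies with wavelength. The design focal length of 200 mm matches the lens used in the experiment, but the Cauchy dispersion coefficients and design index below are illustrative assumptions; the paper does not specify the glass.

```python
import numpy as np

# Hypothetical dispersion model: Cauchy equation n(lambda) = A + B / lambda^2.
# The coefficients are roughly BK7-like, chosen for illustration only.
CAUCHY_A, CAUCHY_B = 1.5046, 4.2e3   # B in nm^2

def refractive_index(wavelength_nm):
    return CAUCHY_A + CAUCHY_B / wavelength_nm**2

def focal_length(wavelength_nm, f_design=200.0, n_design=1.5168):
    """Thin-lens focal length f(lambda) in mm, scaled from a design focal length.

    The lensmaker's equation gives 1/f = (n - 1) * K for a fixed surface-curvature
    term K, so f(lambda) = f_design * (n_design - 1) / (n(lambda) - 1).
    """
    return f_design * (n_design - 1.0) / (refractive_index(wavelength_nm) - 1.0)

def defocus(z_j, wavelength_nm):
    """Defocus term of Eq. (1): 1/z_j - 1/f(lambda), in 1/mm."""
    return 1.0 / z_j - 1.0 / focal_length(wavelength_nm)

# Fix the detection position at the focal point of 480 nm light; longer
# wavelengths then focus farther away, giving a positive defocus term.
z1 = focal_length(480.0)
for lam in (480.0, 580.0, 680.0, 780.0):
    print(f"{lam:.0f} nm: defocus = {defocus(z1, lam):+.3e} mm^-1")
```

Because a normal-dispersion glass has a lower index at longer wavelengths, `focal_length` increases monotonically with wavelength, reproducing the axial chromatic focal shift of Fig. 1.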

In addition to the axial chromatic aberration, the spherical aberration causes the monochromatic incident light on the lens pupil with different aperture radii to focus at different positions on the optical axis as shown in Fig. 2.

Fig. 2 Illustration of spherical aberration.

The spherical aberration for a given incident monochromatic wavelength can be described by Eq. (2).

$$\delta_{\mathrm{sph},\lambda} = f_\lambda(r_1) - f_\lambda(r_2),\tag{2}$$
where fλ(r1) and fλ(r2) are the focal lengths of the incident light corresponding to the aperture radii r1 and r2, respectively. Usually, the first-order and second-order spherical aberrations make the major contributions in most optical systems. Therefore, the contributions from the first- and second-order spherical aberrations to δsph,λ are shown in Eq. (3).
$$\delta_{\mathrm{sph},\lambda} = C_1 r_1^2 + C_2 r_1^4 = C_1 (x^2 + y^2) + C_2 (x^2 + y^2)^2,\tag{3}$$
where C1 and C2 are the primary and secondary spherical aberration coefficients, respectively, (x, y) is the coordinate of the incident point on the lens pupil, and r1 = (x² + y²)^{1/2}.

Owing to the axial spherical aberration, monochromatic light forms a diffuse spot at the image plane located at z = fλ(r2), which can be described as

$$\delta_{\mathrm{vsph},\lambda} = \delta_{\mathrm{sph},\lambda} \tan U_2 = \left[ C_1 (x^2 + y^2) + C_2 (x^2 + y^2)^2 \right] \tan\frac{\sqrt{x^2 + y^2}}{f_\lambda(r_2)},\tag{4}$$
where U2 is the aperture angle.

As a result, the error function P'(x, y, zj), associated with the axial chromatic and spherical aberrations, can be given by Eq. (5).

$$P'(x_j, y_j, z_j) = P(x,y,z)\left( \delta_{\text{defocus}} + \delta_{\mathrm{sph},\lambda} + \delta_{\mathrm{vsph},\lambda} \right) = P(x,y,z)\left\{ \left[ \frac{1}{z_j} - \frac{1}{f(\lambda)} \right] + \left[ C_1(x^2 + y^2) + C_2(x^2 + y^2)^2 \right]\left( 1 + \tan\frac{\sqrt{x^2 + y^2}}{f_\lambda(r)} \right) \right\},\tag{5}$$
where P(x, y, z) is the pupil function of the lens.
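As a sketch, the error function of Eq. (5) can be evaluated numerically on a sampled pupil. C1 = 0.0571 and the 50.8 mm aperture are taken from the paper's simulation parameters; the grid size, sampling window, and C2 = 0 are our own assumptions.

```python
import numpy as np

def error_function(x, y, z_j, f_lambda, C1=0.0571, C2=0.0, radius=25.4):
    """Evaluate P'(x, y, z_j) of Eq. (5) on a Cartesian pupil grid (mm units).

    P(x, y, z) is modeled as a circular top-hat pupil: 1 inside the 50.8 mm
    aperture, 0 outside. C2 = 0 neglects the secondary spherical aberration,
    as done in the paper's simulations for a relatively small aperture.
    """
    r2 = x**2 + y**2
    pupil = (r2 <= radius**2).astype(float)
    defocus = 1.0 / z_j - 1.0 / f_lambda                 # Eq. (1) term
    sph = C1 * r2 + C2 * r2**2                           # Eq. (3) term
    return pupil * (defocus + sph * (1.0 + np.tan(np.sqrt(r2) / f_lambda)))

# Sample the pupil on a 256 x 256 grid spanning +/- 30 mm (grid is our choice):
coords = np.linspace(-30.0, 30.0, 256)
X, Y = np.meshgrid(coords, coords)
P_err = error_function(X, Y, z_j=200.5, f_lambda=200.0)
print(P_err.shape)
```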

For a monochromatic light, the amplitude distribution of the imaged spot is a periodic function in the free space. However, the amplitude of the light wave is modulated due to the limitation posed by the lens aperture, yielding a more complicated amplitude distribution function, as the light transmits through the lens [29]. A representative amplitude distribution of a collimated monochromatic light at one of the focal points is shown in Fig. 3.

Fig. 3 Monochromatic diffraction: (a) 2D view of the diffuse spot; (b) 3D view of the diffuse spot.

We apply the error function (from Eq. (5)) in the Fresnel diffraction model to obtain the intensity distribution of a monochromatic light on the detection plane. Then, the incident wave field at the detection plane zj for a wavelength λi is given by Eq. (6).

$$E(x_j, y_j, z_j, \lambda_i) = \iint P'(x_j, y_j, z_j)\,\frac{A}{i\lambda_i z_j^2} \exp\left[ \frac{i k_i}{2 z_j}\left(x^2 + y^2\right) \right] \exp\left[ i k_i \left( \frac{x_j}{z_j}x + \frac{y_j}{z_j}y \right) \right] dx\,dy,\tag{6}$$
where A is the amplitude of the incident monochromatic light, ki = 2π/λi is the wavenumber, the subscript i denotes the index of the incident wavelength, and j denotes the detection position.
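A minimal numerical sketch of Eq. (6): since the integral is a Fourier transform of P' multiplied by a quadratic phase factor, a single FFT evaluates the field on the detection plane. Unit amplitude (A = 1), millimeter units, and the grid parameters are our assumptions, not values from the paper.

```python
import numpy as np

def fresnel_intensity(pupil_err, dx, wavelength, z_j):
    """Intensity |E|^2 of Eq. (7) from the modified pupil P' of Eq. (5).

    Eq. (6) is the Fourier transform of P'(x, y, z_j) times the quadratic
    phase exp[ik(x^2 + y^2) / (2 z_j)], so one FFT evaluates it on a grid
    of output angles x_j/z_j, y_j/z_j.
    """
    n = pupil_err.shape[0]
    coords = (np.arange(n) - n // 2) * dx
    X, Y = np.meshgrid(coords, coords)
    k = 2.0 * np.pi / wavelength
    integrand = pupil_err * np.exp(1j * k * (X**2 + Y**2) / (2.0 * z_j))
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(integrand))) * dx**2
    field /= 1j * wavelength * z_j**2   # prefactor A / (i * lambda_i * z_j^2), A = 1
    return np.abs(field) ** 2

# Smoke test: propagate a uniform circular pupil (aberration terms omitted).
n, dx = 128, 0.4                        # 128 x 128 samples, 0.4 mm pitch
coords = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(coords, coords)
pupil = ((X**2 + Y**2) <= 25.4**2).astype(float)
I = fresnel_intensity(pupil, dx, wavelength=550e-6, z_j=200.0)   # 550 nm, in mm
print(I.shape)
```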

The intensity distribution for various incident wavelengths at their respective detection positions can be obtained by using Eq. (7).

$$I_{i,j}(x_j, y_j) = \left| E(x_j, y_j, z_j, \lambda_i) \right|^2,\tag{7}$$
where I_{i,j}(x_j, y_j) is the intensity distribution at detection position z_j for wavelength λ_i; the case j = i, I_{i,i}(x_j, y_j), is the intensity distribution in the focal plane of λ_i, and E is the incident wave field.

Thus, a discrepancy coefficient, ai,j, is introduced in our model (Eq. (8)) to quantitatively analyze the deviations in the intensity distribution for the images formed at different locations.

$$a_{i,j} = \frac{\max\left( I_{i,j}(x_j, y_j) \right)}{\max\left( I_{i,i}(x_j, y_j) \right)},\tag{8}$$
where ai,j is the discrepancy coefficient.

Furthermore, for comparison of the intensity distributions, the normalized intensity (NI) is defined in Eq. (9).

$$NI_{i,j} = \frac{I_{i,j}(x_j, y_j)}{\max\left( I_{i,i}(x_j, y_j) \right)}.\tag{9}$$
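The discrepancy coefficient of Eq. (8) and the normalized intensity of Eq. (9) are simple peak ratios. A small sketch, with synthetic Gaussian spots standing in for the simulated diffraction images (the Gaussians are purely illustrative):

```python
import numpy as np

def discrepancy_coefficient(I_ij, I_ii):
    """a_{i,j} of Eq. (8): peak of the off-focus image over the in-focus peak."""
    return I_ij.max() / I_ii.max()

def normalized_intensity(I_ij, I_ii):
    """NI_{i,j} of Eq. (9): image normalized by the in-focus peak."""
    return I_ij / I_ii.max()

# Toy spots: a sharp in-focus spot (peak 1) and a broader, dimmer off-focus spot.
x = np.linspace(-1.0, 1.0, 201)
X, Y = np.meshgrid(x, x)
I_focus = np.exp(-(X**2 + Y**2) / 0.01)
I_defocus = 0.5 * np.exp(-(X**2 + Y**2) / 0.04)

a = discrepancy_coefficient(I_defocus, I_focus)
print(a)   # the peak ratio of the two spots
```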

In order to demonstrate the effect of the discrepancy coefficient ai,j, we selected ai,j values of 1, 0.8, 0.5, and 0.2 for our simulations. The value ai,j = 1 corresponds to the case when the detection position coincides with the focal position. Figure 4 shows the simulated normalized intensity distributions for the selected values of ai,j. The simulation results show that the maximum height of the normalized intensity peaks decreases significantly as the ai,j values decrease.

Fig. 4 Simulated normalized intensity curves for selected discrepancy coefficients.

Since polychromatic light is made up of a series of monochromatic waves of different wavelengths, the intensity distribution of a polychromatic light source can be considered as the sum of the monochromatic intensities associated with each constituent wavelength. Therefore, the intensity distribution of the polychromatic light, Pj, at the different detection positions can be described as:

$$P_j = I_{1,j} + I_{2,j} + \cdots + I_{i,j},\tag{10}$$

The intensity distribution matrix (Eq. (11)) for the polychromatic light can be expressed in terms of the discrepancy coefficients ai,j and the spectral intensities I = [Iλ1, Iλ2, …, Iλi] collected at the respective detection positions, as shown below:

$$\begin{bmatrix} P_1 \\ P_2 \\ \vdots \\ P_{j-1} \\ P_j \end{bmatrix} = \begin{bmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,j-1} & a_{1,j} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,j-1} & a_{2,j} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ a_{i-1,1} & a_{i-1,2} & \cdots & a_{i-1,j-1} & a_{i-1,j} \\ a_{i,1} & a_{i,2} & \cdots & a_{i,j-1} & a_{i,j} \end{bmatrix} \begin{bmatrix} I_{\lambda_1} \\ I_{\lambda_2} \\ \vdots \\ I_{\lambda_{i-1}} \\ I_{\lambda_i} \end{bmatrix} = AI,\tag{11}$$

The spectral information can be obtained by solving Eq. (11) using the least squares method [30]:

$$\min_{I} \left\| AI - P \right\|_2^2 = (AI - P)^T (AI - P),\tag{12}$$
Equation (12) can be expanded as:
$$\varphi(I) = I^T A^T A I - I^T A^T P - P^T A I + P^T P,\tag{13}$$
The derivative of φ (I) in Eq. (13) yields
$$\frac{d\varphi}{dI} = 2 A^T A I - 2 A^T P,\tag{14}$$
The solution for the vector I, shown in Eq. (15), is obtained by setting the derivative in Eq. (14) to zero (minimum condition). Equation (15) requires that ATA be non-singular, i.e., that rank(A) = j (full column rank).

$$I = \left( A^T A \right)^{-1} A^T P,\tag{15}$$
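The pipeline of Eqs. (11)–(15) can be sketched end to end with a synthetic discrepancy matrix. The Gaussian-shaped A and the random test spectrum below are illustrative stand-ins, not the paper's calibrated values:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 8                                   # eight channels, as in the experiment
# Illustrative diagonally dominant discrepancy matrix: a_{i,j} decays away
# from the diagonal, mimicking the shape of the calibrated matrix A.
d = np.subtract.outer(np.arange(n), np.arange(n)).astype(float)
A = np.exp(-d**2)
I_true = rng.uniform(0.1, 1.0, size=n)  # "unknown" spectrum to recover

P = A @ I_true                          # Eq. (11): detected intensities
# Eq. (15): I = (A^T A)^{-1} A^T P. np.linalg.lstsq solves the same least
# squares problem without forming an explicit inverse, which is numerically
# safer when A is poorly conditioned.
I_rec, *_ = np.linalg.lstsq(A, P, rcond=None)

print(np.allclose(I_rec, I_true))       # noise-free data is recovered
```

With noisy measurements P, the same call returns the least squares estimate rather than an exact recovery; the conditioning of A then governs how strongly the noise is amplified.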

3. Simulation

In order to demonstrate the effect of axial chromatic aberration using our model, we simulate the images of collimated monochromatic light at different imaging positions. We chose 20 wavelengths, denoted λ1–λ20, in the range of 480–780 nm at an interval of 20 nm; the corresponding focal points of these wavelengths are denoted z1–z20. The diameter of the imaging lens used in our simulation is 50.8 mm. Considering that the aperture of the lens is relatively small, we only used the primary spherical aberration coefficient, C1 = 0.0571, in our calculations.

The simulation results for monochromatic collimated light with wavelengths λ1–λ20 imaged at position z1 are shown in Fig. 5. Since position z1 is the focal length of the lens for wavelength λ1, the defocus increases as the wavelength increases, according to Eq. (1). As evident from Fig. 5, the diameter of the diffuse spots increases with the wavelength. Additionally, the intensity at the spot center decreases with increasing wavelength.

Fig. 5 Simulated results for λ1–λ20 light imaged at the focal point z1. The top portion shows the 2D view of the diffuse spots, from left to right corresponding to increasing wavelength, and the bottom portion shows the normalized intensity at the central point of the respective diffuse spots.

Furthermore, we simulated the diffraction images at the detection positions z1–z20 for a specific wavelength, λ1. The detection positions z1–z20 correspond to the focal points of the respective wavelengths, f(λ1)–f(λ20). As the wavelength increases, the distance between the focal points of adjacent wavelengths decreases, as shown in Fig. 6.

Fig. 6 Simulated results for λ1 light imaged at different detection positions, z1–z20. The top portion shows the 2D view of diffuse spots, from left to right corresponding to different detection positions, and the bottom portion shows the normalized intensity at the spot center.

The simulation results for a polychromatic light composed of the 20 selected wavelengths at an interval of 20 nm are shown in Fig. 7. Additionally, the normalized intensities of the original spectrum of the polychromatic light for these selected wavelengths are shown in Fig. 8. The normalized intensity distributions based on the proposed model are generated using Eq. (10). The simulated data elements of the matrix P and the corresponding components of the discrepancy matrix A are then used to reconstruct the spectrum of the polychromatic light, as shown in Fig. 8.

Fig. 7 Simulated results of the polychromatic light imaged at z1–z20. The top portion shows the 2D view of diffuse spots, from left to right corresponding to different positions; the bottom portion shows the normalized intensity at the central position of each diffuse spot.

Fig. 8 Simulated original and reconstructed spectra with the normalized intensities corresponding to the selected wavelengths.

The spectrum reconstructed by the proposed method and the original spectrum are shown in Fig. 8. The similarity between the original and reconstructed spectra, evaluated using the spectral correlation coefficient (SCC) defined in Eq. (16), was found to be 0.9997 [31].

$$SCC(u,v) = \frac{\sum_{i=1}^{n}\left(u_i - \bar{u}\right)\left(v_i - \bar{v}\right)}{\sqrt{\sum_{i=1}^{n}\left(u_i - \bar{u}\right)^2}\sqrt{\sum_{i=1}^{n}\left(v_i - \bar{v}\right)^2}},\tag{16}$$
where u are the original spectrum data, v are the restored data, n is the total number of data points, and $\bar{u}$ and $\bar{v}$ are the averages of the original and reconstructed data, respectively.

To evaluate the similarity of the spectra, we also calculated the spectral mean square error (SMSE) using Eq. (17) [32] for our simulated data and a value of 0.0025 was obtained.

$$SMSE(u,v) = \frac{\sum_{i=1}^{n}\left(u_i - v_i\right)^2}{n^2}.\tag{17}$$
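Both similarity metrics are straightforward to compute. A minimal sketch, where the sample spectra are made-up numbers for illustration:

```python
import numpy as np

def scc(u, v):
    """Spectral correlation coefficient of Eq. (16) (a Pearson correlation)."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    du, dv = u - u.mean(), v - v.mean()
    return (du * dv).sum() / np.sqrt((du**2).sum() * (dv**2).sum())

def smse(u, v):
    """Spectral mean square error of Eq. (17), with the paper's 1/n^2 factor."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return ((u - v) ** 2).sum() / u.size**2

original = np.array([0.10, 0.50, 1.00, 0.70, 0.20])   # illustrative spectrum
restored = np.array([0.12, 0.48, 0.98, 0.72, 0.21])   # close reconstruction

print(scc(original, restored))    # close to 1 for similar spectra
print(smse(original, restored))   # small for similar spectra
```

Note that SCC is invariant to the overall scaling of either spectrum, while the SMSE is not; using both, as the paper does, therefore checks both the shape and the amplitude of the reconstruction.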

4. Experiment

The experimental set-up, consisting of a monochromatic source, a polychromatic source, a beam splitter, a filter, a lens, and a CCD camera on a translation stage, is shown in Fig. 9. The beam splitter ensures the invariance of the experimental set-up when switching the light sources during experiments. The filter limits the spectral range detected by the CCD camera. The collimating system generates a parallel light beam before the incident light reaches the lens surface. The CCD camera is placed on the translation stage and records images while the stage is moved along the optical axis.

Fig. 9 Illustration of the experimental set-up.

The actual experimental set-up is shown in Fig. 10. A mercury lamp is used as the polychromatic light source, and a monochromator is used to produce the monochromatic light beam in the experiment. The transmission band of the filter is in the range of 400–630 nm. The lens has a diameter of 50.8 mm and a focal length of f = 200 mm. The motion range of the translation stage is 10 mm with a step size of 1 μm. The experiment was carried out in a clean room under constant temperature and humidity conditions. Special measures were taken to control the operating temperature of the CCD camera; therefore, the camera-induced errors in our measurements are negligible.

Fig. 10 Actual experimental set-up; the collimating systems are replaced by a collimator.

The experimental details, using the set-up shown in Fig. 9, are as follows. The measurements were performed for eight selected monochromatic wavelengths, namely 520 nm, 530 nm, 540 nm, 550 nm, 560 nm, 570 nm, 580 nm, and 590 nm, generated by the monochromator. The CCD mounted on the translation stage was then used to collect the images at each focal point (detection position) corresponding to the wavelength. The process was repeated until all the images corresponding to each wavelength were obtained. The exposure time for each detection was set to 9.25 ms. To record the images of the polychromatic light from the mercury lamp, the CCD was placed at the focal point of one wavelength and then moved to the focal points of the remaining wavelengths using the translation stage.

The images of the monochromatic light recorded at the different detection positions are shown in Fig. 11. Using the intensity distributions of these images for the eight selected wavelengths, the coefficients of matrix A are calculated as given below:

Fig. 11 Diffraction images of monochromatic light at different detection positions.

$$A = \begin{bmatrix} 1 & 0.6087 & 0.3393 & 0.2583 & 0.1702 & 0.1481 & 0.1398 & 0.1305 \\ 0.7341 & 1 & 0.3920 & 0.2328 & 0.1907 & 0.1694 & 0.1602 & 0.1448 \\ 0.1302 & 0.4661 & 1 & 0.7073 & 0.3607 & 0.2244 & 0.1912 & 0.1629 \\ 0.0726 & 0.2387 & 0.6521 & 1 & 0.4419 & 0.2674 & 0.2087 & 0.1705 \\ 0.0370 & 0.0952 & 0.2858 & 0.7398 & 1 & 0.4365 & 0.3127 & 0.2781 \\ 0.0195 & 0.0360 & 0.0872 & 0.2367 & 0.5334 & 1 & 0.7034 & 0.4072 \\ 0.0128 & 0.0220 & 0.0360 & 0.0812 & 0.2068 & 0.5493 & 1 & 0.9334 \\ 0.0113 & 0.0173 & 0.0286 & 0.0606 & 0.1518 & 0.4267 & 0.7827 & 1 \end{bmatrix}$$

Figure 12 shows the intensity distribution of the polychromatic light from the mercury lamp at eight detection positions, located at the focal points of each constituent wavelength.

Fig. 12 Diffraction images of the polychromatic light from the mercury lamp at different detection positions.

Using the experimental data to evaluate the intensity distributions of the polychromatic light at the different imaging positions, we obtained the coefficients Pj. The spectrum of the mercury lamp was then restored with the least squares method, following the procedure discussed in Section 3.

We used a calibrated fiber spectrometer (Ocean Optics) to measure the spectrum of the mercury lamp. This measured spectrum is used as the original spectrum for comparison purposes. The measured (original) and reconstructed spectra are shown in Fig. 13. The similarity between the original and reconstructed spectra is evaluated using the SCC criterion and the SMSE calculation. The results show that SCC = 0.9683 and SMSE = 0.0204 for our experimental data.

Fig. 13 Reconstructed spectrum and original spectrum for the polychromatic light from the mercury lamp used in the experiment.

5. Summary

In this paper, a cost-efficient spectral acquisition method that incorporates the axial chromatic and spherical aberrations of a lens is proposed. An experimental set-up was built to obtain the spectra of both monochromatic and polychromatic light. A mercury lamp was used as the polychromatic light source in the experiment, and a monochromator with a filter was used with the mercury lamp to generate monochromatic light of various wavelengths. Simulations and experiments were conducted to test and verify the performance of the proposed method. We used the SCC and SMSE to evaluate the quality of the reconstructed spectra. The values of SCC and SMSE were 0.9997 and 0.0025, respectively, for the simulated spectrum, and 0.9683 and 0.0204, respectively, for the spectrum of the polychromatic light from the mercury lamp in our experiments.

Funding

National Natural Science Foundation of China (NSFC,61635002); Natural Science Foundation of Beijing Municipality (4172038); Fundamental Research Funds for the Central Universities.

Acknowledgments

Peidong He and Lijuan Su are co-first authors. We are grateful to the editors and reviewers. Their advice helped us improve the quality and readability of this paper.

References

1. J. R. Jensen, Remote sensing of the environment: An earth resource perspective (Pearson Education India, 2009).

2. I. G. E. Renhorn, D. Bergström, J. Hedborg, D. Letalick, and S. Möller, “High spatial resolution hyperspectral camera based on a linear variable filter,” Opt. Eng. 55(11), 114105 (2016). [CrossRef]  

3. G. Thomas and G. Laurenz, “Synthetic Approaches Towards CGA 293′343: A Novel Broad-Spectrum Insecticide,” Pestic. Sci. 55(3), 355–357 (2010).

4. D. Stein, J. Schoonmaker, and E. Coolbaugh, “Hyperspectral Imaging for Intelligence, Surveillance, and Reconnaissance,” Space and Naval Systems Warfare Center (SSC) San Diego Biennial Review, 108–116 (2001).

5. L. Perez-Freire, “Spread-spectrum watermarking security,” IEEE TIFS 4(1), 2–24 (2009).

6. M.-L. Junttila, J. Kauppinen, and E. Ikonen, “Performance limits of stationary Fourier spectrometers,” J. Opt. Soc. Am. A 8(9), 1457–1462 (1991). [CrossRef]  

7. M. Özcan and B. Sardari, “Broadband and High-Resolution Static Fourier Transform Spectrometer with Bandpass Sampling,” Appl. Spectrosc. 72(7), 1116–1121 (2018). [CrossRef]   [PubMed]  

8. I. Renhorn, V. Achard, M. Axelsson, K. Benoist, D. Borghys, X. Briottet, R. Dekker, A. Dimmeler, O. Friman, I. Kåsen, S. Matteoli, M. L. Moro, T. O. Opsahl, M. van Persie, S. Resta, H. Schilling, P. Schwering, M. Shimoni, T. V. Haavardsholm, and F. Viallefont, “Hyperspectral reconnaissance in urban environment,” Proc. SPIE 8704, 87040L (2013). [CrossRef]  

9. D. Romanini, I. Ventrillard, G. Méjean, J. Morville, and E. Kerstel, “Introduction to cavity enhanced absorption spectroscopy,” in Cavity-Enhanced Spectroscopy and Sensing (Springer, 2014), Vol. 179, pp. 1–60.

10. J. Antila, R. Mannila, U. Kantojärvi, C. Holmlund, A. Rissanen, I. Näkki, J. Ollila, and H. Saari, “Spectral imaging device based on a tuneable MEMS Fabry-Perot interferometer,” Proc. SPIE 8374, 83740F (2012). [CrossRef]  

11. N. Gupta, “Hyperspectral imager development at Army Research Laboratory,” Proc. SPIE 6940, 69401P (2008). [CrossRef]  

12. M. Dami, R. De Vidi, G. Aroldi, F. Belli, L. Chicarella, A. Piegari, A. Sytchkova, J. Bulir, F. Lemarquis, M. Lequime, L. Abel Tibérini, and B. Harnisch, “Ultra compact spectrometer using linear variable filters,” International Conference on Space Optics ISOP-10565 (2018). [CrossRef]  

13. C. Zhang, B. Zhao, and B. Xiangli, “Wide-field-of-view polarization interference imaging spectrometer,” Appl. Opt. 43(33), 6090–6094 (2004). [CrossRef]   [PubMed]  

14. R. Shang, S. Chen, C. Li, and Y. Zhu, “Spectral modulation interferometry for quantitative phase imaging,” Biomed. Opt. Express 6(2), 473–479 (2015). [CrossRef]   [PubMed]  

15. L. Su, Y. Yuan, B. Xiangli, F. Huang, J. Cao, L. Li, and S. Zhou, “Spectrum reconstruction method for airborne temporally–spatially modulated Fourier transform imaging spectrometers,” IEEE Trans. Geosci. Remote Sens. 52(6), 3720–3728 (2014). [CrossRef]  

16. S. Sparrold, “Thick Lens Chromatic Effective Focal Length Variation Versus Bending,” in Optical Design and Fabrication 2017 (Freeform, IODC, OFT) (Optical Society of America, 2017), paper IM3A.4.

17. K. R. Castle, “Hyperspectral imaging using linear chromatic aberration,” US Patent No. US 6,552,788 B1 (2003).

18. A. Ojaghi and F. E. Robles, “Ultraviolet multi-spectral microscopy using iterative phase-recovery from chromatic aberrations,” Proc. SPIE 10087, 100870M (2019). [CrossRef]  

19. M. A. Volkova, V. R. Lutsiv, L. S. Nedoshivina, and A. A. Ivanova, “Using the effect of longitudinal chromatic aberration for measuring distances from a single color photograph,” J. Opt. Technol. 86(1), 42–47 (2019). [CrossRef]  

20. A. Faggiano, C. Gadda, P. Moro, G. Molesini, and F. Quercioli, “Longitudinal Chromatic Aberration Spectroscope,” Proc. SPIE 656, 213–219 (1986).

21. G. Molesini and F. Quercioli, “Pseudocolor effects of longitudinal chromatic aberration,” J. Opt. 17(6), 279–282 (1986). [CrossRef]  

22. D. M. Lyons, “Image spectrometry with a diffractive optic,” Proc. SPIE 2480, 123–131 (1995). [CrossRef]  

23. M. Hinnrichs and M. A. Massie, “New approach to imaging spectroscopy using diffractive optics,” Proc. SPIE 3118, 194–205 (1997). [CrossRef]  

24. K. Iizuka, Engineering Optics (Springer, 2010).

25. M. Gu and C. J. R. Sheppard, “Effects of defocus and primary spherical aberration on three-dimensional coherent transfer functions in confocal microscopes,” Appl. Opt. 31(14), 2541–2549 (1992). [CrossRef]   [PubMed]  

26. S. I. Wang and B. R. Frieden, “Effects of third-order spherical aberration on the 3-D incoherent optical transfer function,” Appl. Opt. 29(16), 2424–2432 (1990). [CrossRef]   [PubMed]  

27. V. N. Mahajan, “Aberrated point-spread functions for rotationally symmetric aberrations,” Appl. Opt. 22(19), 3035 (1983). [CrossRef]   [PubMed]  

28. A. R. Fitzgerrell, E. R. Dowski Jr., and W. T. Cathey, “Defocus transfer function for circularly symmetric pupils,” Appl. Opt. 36(23), 5796–5804 (1997). [CrossRef]   [PubMed]  

29. K. R. Sui, X. S. Zhu, X. L. Tang, K. Iwai, M. Miyagi, and Y. W. Shi, “Method for evaluating material dispersion of dielectric film in the hollow fiber,” Appl. Opt. 47(34), 6340–6344 (2008). [CrossRef]   [PubMed]  

30. F. Zhang, Matrix Analysis and Applications (Tsinghua University, 2017).

31. Z. Phillips, V. E. Kim, and J. G. Kim, “Preliminary Study of Gender-Based Brain Lateralization Using Multi-Channel Near-Infrared Spectroscopy,” J. Opt. Soc. Korea 19(3), 284–296 (2015). [CrossRef]  

32. L. Zhang, D. Liang, D. Zhang, X. Gao, and X. Ma, “Study of Spectral Reflectance Reconstruction Based on an Algorithm for Improved Orthogonal Matching Pursuit,” J. Opt. Soc. Korea 20(4), 515–523 (2016). [CrossRef]  
