Optica Publishing Group

3D camera based on laser light absorption by atmospheric oxygen at 761 nm

Open Access

Abstract

A 3D camera based on laser light absorption by atmospheric oxygen at 761 nm is presented. The camera uses a current-tunable single frequency distributed feedback laser for active illumination and a silicon-based image sensor as a receiver. This simple combination enables capturing 3D images with a compact and mass-producible setup. The 3D camera is validated in indoor environments. Distance accuracy of better than 4 cm is demonstrated at distances between 4 m and 10 m. Future potential and improvements are discussed.

Published by Optica Publishing Group under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

1. Introduction

3D cameras are increasingly needed in applications involving automation and mobile platforms, such as logistics and factories. In particular, there is growing demand for short- to medium-range 3D cameras for low-cost robot platforms. The go-to solutions for 3D perception have mainly been light detection and ranging (lidar) systems, which use the time-of-flight (ToF) of a laser pulse to determine distance [1,2]. These devices can be scanning single-point systems or imaging systems based on a ToF sensor array. Passive systems include stereo vision cameras, where distance is acquired by triangulation [1,3]. Commercially available scanning lidars can achieve impressively dense point clouds at high frame rates by combining multiple laser units on a rotating assembly, and the rotating motion also gives them a very wide field of view. Commercial devices are, however, expensive, and scaling to high-volume applications has been difficult due to the use of mechanical scanners. Low-cost commercial scanning lidars typically produce significantly fewer points in the 3D point cloud. More recently, 3D cameras based on ToF sensor arrays have emerged on the market. ToF cameras have no moving parts and can be a suitable alternative to scanning lidars. Stereo vision cameras, on the other hand, can provide a very high-resolution 3D image of a wide scene at a lower price point. These devices also contain no moving parts, and their cost per pixel can be lower than that of comparably priced lidars. Both lidar sensors and stereo vision cameras show varying degrees of performance in bright ambient lighting. Some stereo vision cameras use active pattern projection to increase the accuracy of 3D imaging, especially in dark environments. The performance of lidar devices is generally degraded by bright ambient illumination due to the sensitive detectors used.
Both stereo cameras and lidars have justified use cases, and the choice typically depends on the requirements of the application. There is therefore room for novel 3D imaging solutions, especially at short to medium range (0–10 m) and a lower price point.

Atmospheric absorption provides a means for ranging. For example, a passive single-point ranging system based on optical band pass filters, utilizing oxygen absorption lines, has been demonstrated [4–10]. Similarly, mid-infrared wavelengths have been used for the same purpose, but using the absorption of light by atmospheric carbon dioxide [11]. As for active systems, a one-way system has been proposed that utilizes a single frequency distributed feedback (DFB) laser to probe the absorption line of atmospheric oxygen [12]. More recently, telecom wavelengths have been similarly utilized to demonstrate an active single-point ranging system measuring ambient water vapor absorption with a DFB laser [13,14].

So far, no 3D imaging systems based on atmospheric absorption have been demonstrated. The atmospheric absorption of oxygen presents an interesting solution for depth imaging, since both the laser and the required imaging sensor (silicon-based CMOS or CCD) are widely available and mass producible today. Instead of a single point measurement, as presented in earlier work [12,13], the DFB laser can be used to illuminate a wide area that is then imaged with a simple imaging sensor. Since the approach is based on laser illumination, the sensor can operate in dark conditions. Similar to lidars, the narrow illumination wavelength band allows efficient filtering of ambient light. Furthermore, the continuous-wave output of the laser could allow other modulation and detection techniques that further improve immunity to ambient light. This approach can result in a very simple and robust 3D imaging device, using relatively simple electronics to operate both the laser and the image sensor.

In this work, we demonstrate a 3D camera based on laser light absorption by atmospheric oxygen. We use the direct absorption (DA) method, in which the wavelength of a DFB laser is swept over an absorption line of molecular oxygen. A regular silicon-based camera is used to measure the 2D absorption spectrum of the target scene, from which the 3D image is acquired. The experimental setup and data processing strategy are presented in section 2. In section 3 we present indoor tests of the performance of the 3D camera, and conclusions are drawn in section 4.

2. Methods

The concept of the 3D camera is shown in Fig. 1(a). The injection current of the DFB laser (Photodigm PH760DBR), with a manufacturer-specified linewidth of less than 10 MHz, is controlled with a laser diode driver (Wavelength Electronics LDTC0520). The driver includes a temperature controller, which was used to stabilize the laser temperature at a fixed setting. The average output power of the laser in the tests was 30 mW. The current is set using a microcontroller (Arduino Nano 33 IoT) and a 12-bit digital-to-analog converter. The CMOS camera used in this work was a FLIR BFS-U3-27S5M, binned to a resolution of 484 × 366 pixels. The microcontroller receives a digital signal from the CMOS camera at the beginning of each frame. At the falling edge of this signal, when the camera reads out the integrated charges, the microcontroller adjusts the laser current, and thus the emitted wavelength, to the next pre-determined value. Due to the lack of collimation optics, the laser output is divergent, approximately 26 degrees and 6 degrees FWHM for the two axes. An objective lens (Tamron A032E) is used in front of the CMOS camera to image targets within the illuminated area. An optical band pass filter (Thorlabs FB760-10), with a center wavelength of 760 nm and a full width at half-maximum of 10 nm, is used to filter ambient light.
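As a minimal illustration of the frame-synchronized current sweep described above, the sketch below computes equally spaced current set points and quantizes them to 12-bit codes. The current range and converter full scale are assumed values for illustration, not specifications of the prototype.

```python
# Hypothetical sketch of the frame-synchronized current sweep; the sweep
# limits and converter full scale are assumed for illustration only.
N_POINTS = 60                  # wavelength points per 3D frame (from the text)
DAC_BITS = 12                  # 12-bit converter sets the laser current
I_MIN, I_MAX = 80.0, 120.0     # mA, assumed sweep range
FULL_SCALE_MA = 200.0          # mA, assumed converter full scale

def sweep_codes(n_points=N_POINTS, i_min=I_MIN, i_max=I_MAX,
                bits=DAC_BITS, full_scale=FULL_SCALE_MA):
    """Equally spaced current set points, quantized to converter codes."""
    step = (i_max - i_min) / (n_points - 1)
    return [round((i_min + k * step) / full_scale * (2 ** bits - 1))
            for k in range(n_points)]

codes = sweep_codes()
```

In the prototype, the microcontroller steps through such a table once per camera frame, advancing on each frame-start trigger.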

Fig. 1. a) Block diagram of the 3D camera prototype. uC = microcontroller, LDD = laser diode driver, DFB = distributed feedback laser, BPF = band pass filter, L = lens, CMOS = complementary metal-oxide-semiconductor imager, B1 = diverging laser output beam, B2 = light reflected from an object. The light beam is attenuated by molecular oxygen. b) Figure of the prototype in aluminum enclosure, showing the laser output and the camera objective.


The output wavelength of the laser is set to cover a single absorption line of molecular oxygen around 761 nm. The atmospheric attenuation was simulated using the HITRAN database [15]. The oxygen absorption spectrum between 758 nm and 771 nm, simulated for 10 m propagation (5 m distance to target) at atmospheric conditions, is shown in Fig. 2(a). There are several suitable lines that can be used for ranging. The absorption line centered around 760.77 nm was used in this work, as it is covered by the laser when operated around room temperature and is one of the strongest absorbing transitions. The simulated absorption spectra for two ranges, 1 m and 5 m, are shown in Fig. 2(b) to illustrate the increase in signal attenuation. The oxygen transition around 760.77 nm and its neighboring lines are free from absorption by other molecules such as ambient carbon dioxide and water vapor: the absorption of water vapor at atmospheric concentrations around 761 nm is around four orders of magnitude lower than that of oxygen, and no carbon dioxide absorption lines exist in this region. The use of the strongest absorption line of oxygen is optimal at short to medium ranges (0–10 m); at longer distances, however, the line approaches full absorption, which makes extracting the absorbance, and thus the distance, inaccurate. Based on simulations, this sets the maximum ranging distance for the selected line to around 75 m, well above the ranges used in this work. While this work focuses on short to medium range detection, longer distances could be covered by using one of the weaker absorption lines, given that enough illumination power is available from the laser. Using the oxygen absorption signal for ranging assumes a constant concentration of molecular oxygen in the air. Ambient air contains around 20.9% oxygen and, based on simulations, a change of approximately 0.2% in oxygen concentration produces an error of 1% in ranging. Environments with larger deviations or different concentrations would require frequent calibration against known distances for accurate ranging.
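The two constraints above, line saturation at long range and sensitivity to the assumed oxygen concentration, both follow from the Beer-Lambert law. The sketch below illustrates them using an assumed peak absorbance per metre of path; the real line strength comes from HITRAN, so the numbers here are illustrative only.

```python
import math

# Two illustrative consequences of Beer-Lambert attenuation, using an
# ASSUMED base-10 peak absorbance per metre of round-trip path.
ALPHA = 0.03   # assumed peak absorbance per metre of path (illustrative)

def absorbed_fraction(range_m, alpha=ALPHA):
    """Fraction of light absorbed at line centre; path = 2 * range."""
    return 1.0 - 10.0 ** (-alpha * 2.0 * range_m)

# 1) The line saturates at long range, degrading the distance estimate:
a_5m = absorbed_fraction(5.0)
a_75m = absorbed_fraction(75.0)   # nearly full absorption

# 2) Since d = A / (eps * c), a relative error in the assumed O2
#    concentration maps directly into a relative ranging error:
c_nom = 0.209
rel_range_err = 0.002 / c_nom     # 0.2 % absolute O2 change -> ~1 % in range
```

The second calculation reproduces the ~1% ranging error quoted above for a 0.2% change in oxygen concentration.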

Fig. 2. a) Simulated oxygen absorption spectra around 761 nm wavelength at 5 m distance (10 m total propagation). b) Simulated absorption spectra of the absorption line around 760.77 nm for 1 and 5 m distances (2 and 10 m propagation), respectively.


The simplest way to use signal absorption to produce a 3D image is to illuminate the scene with two narrowband wavelengths: one at the top of the absorption line, producing maximum absorption of the return signal, and one on the side of the line, producing minimal absorption. The ratio of the intensities captured with the CMOS imaging sensor at these two wavelengths correlates with distance. The issue with this method is the drift and jitter of the laser wavelength, which translate into large errors in the distance measurement. An external wavelength reference would be required to set the laser illumination to precisely the same wavelengths. This issue can be avoided by instead illuminating the target area with multiple wavelength points across the whole absorption profile and thus recording the absorption spectrum. The measured spectrum of each pixel can then be fitted with a theoretical model of the absorption profile. From this model the absorbance, and thus the range, can be measured more accurately and immune to slight drifts in wavelength. Any long-term drift in the laser wavelength can easily be compensated by tuning the laser current offset based on the position of the absorption center.
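For reference, the two-wavelength ratio method reduces to a one-line calculation, sketched below with an assumed peak absorbance per metre of path. As noted above, in practice this simple version is vulnerable to wavelength drift, which is why the full-profile fit is used instead.

```python
import math

# Two-wavelength ratio ranging sketch. ALPHA is an ASSUMED base-10 peak
# absorbance per metre of path, for illustration only.
ALPHA = 0.03

def range_from_ratio(i_on, i_off, alpha=ALPHA):
    """Range from on-line / off-line intensity ratio (round-trip path)."""
    absorbance = -math.log10(i_on / i_off)
    return absorbance / alpha / 2.0   # divide by 2: path = 2 * range

# A target at 5 m attenuates the on-line return by 10**(-ALPHA * 10 m):
r = range_from_ratio(1000.0 * 10.0 ** (-ALPHA * 10.0), 1000.0)
```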

During acquisition of a single 3D frame, the current, and thus the emitted wavelength, is swept over the absorption profile of molecular oxygen using a pre-determined number of equally spaced current values. At the beginning of each 3D frame acquisition, the laser current is brought below the lasing threshold to capture a dark frame illuminated only by ambient light passing through the band pass filter. This dark frame is subtracted from the illuminated frames that follow. In this work, a total of 60 illuminated frames constituted the data cube from which a single 3D frame was produced. The data is read from the camera to a PC through USB 3.0. An illustration of the data cube is shown in Fig. 3(a), and an example 60-point spectrum extracted from the center of the data cube is shown in Fig. 3(b). Sweeping the laser wavelength with injection current tuning increases both the output power and the wavelength of the emission; the detected absorption spectrum therefore has a linearly increasing component. In this work, the frame rate of the camera was set to 45 Hz, which resulted in a 0.74 Hz rate for the 3D data cubes and thus sets the 3D image frame rate. The integration time for each individual frame was 15 ms.
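The acquisition sequence above (one dark frame plus 60 illuminated frames, with the dark frame subtracted from each) can be sketched as follows. The frame contents are synthetic stand-ins; only the cube shape and the subtraction step reflect the text.

```python
import numpy as np

# Data-cube assembly sketch: dark-frame subtraction and per-pixel spectra.
# Frame size is shrunk and pixel values are synthetic, for illustration.
H, W, N = 8, 8, 60                      # tiny frame, 60 wavelength points

rng = np.random.default_rng(0)
dark = rng.uniform(90, 110, size=(H, W))        # ambient-only dark frame
frames = [dark + 500.0 + k for k in range(N)]   # fake illuminated frames

cube = np.stack([f - dark for f in frames], axis=-1)   # shape (H, W, N)
spectrum = cube[H // 2, W // 2, :]      # 60-point spectrum of one pixel
```

Each pixel of the resulting cube holds one absorption spectrum, which is what the fitting stage described next operates on.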

Fig. 3. a) An example of a recorded data cube. b) An example absorption signal extracted from a single pixel of the data cube consisting of 60 illuminated frames. Blue dots are normalized pixel intensity data, the red line is the result from the fitting algorithm and the red dots are the residual (data – fit).


To create the 3D frame from the data cube, a two-step process is applied using the developed LabVIEW software. First, an average over 16 pixels at the center of the data cube is selected for the initial fit, and this spectrum is fitted with a Lorentzian line shape. Typically a Voigt profile is used to represent the absorption profile; however, at atmospheric pressure the pressure broadening, which produces a Lorentzian line shape, dominates over the thermal broadening, which produces a Doppler (Gaussian) line shape. For data processing speed, the data was therefore approximated with a Lorentzian line shape, written as:

$$L(\lambda) = \frac{a}{1 + \left( \frac{\lambda - \lambda_0}{0.5w} \right)^2}$$
where L is the Lorentzian function, a is the peak absorption at the line center, w is the full width at half-maximum, λ is the wavelength and λ0 is the wavelength at peak absorption. The increase of laser power as a function of current is modelled with a linear function added to the absorption profile. The data is fitted to the combined function with an iterative non-linear Levenberg-Marquardt (LM) algorithm. The center wavelength, width and amplitude are set as free parameters and are extracted from the fit for the given pixel. The non-linear fit gives the best estimate of the parameters but was too time-consuming to apply to every illuminated pixel in the image without parallel processing. Therefore, to achieve real-time operation, only a single spectrum from each data cube was fitted with this method. For the rest of the pixels, a line profile with fixed center wavelength and width, taken from the fitted spectrum, was used, and the absorption was extracted by the least-squares method. The use of the least-squares method assumes that the line shape is independent of the total signal intensity and the absorption depth, both of which differ from pixel to pixel due to the Gaussian illumination profile and the different distances measured by each pixel. In practice, the line shape depends on the absorption through the Beer-Lambert non-linearity at large absorption depths, and any non-linearity in the camera response can affect the line shape through differences in total signal intensity. Several approximations are thus made here; the goal is not to extract an accurate absorbance reading, but rather a stable and precise estimate of the absorbance, which can then be corrected using known distances.
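To illustrate the model and the LM fit, the sketch below generates a synthetic 60-point spectrum (a Lorentzian dip on a linearly rising power ramp, with assumed parameter values) and recovers the parameters with SciPy's `curve_fit`, whose default method for unconstrained problems is Levenberg-Marquardt. This is a sketch of the technique, not the authors' LabVIEW implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

# Lorentzian absorption dip multiplying a linear laser-power ramp. The sweep
# axis is the frame index and all parameter values are assumed illustrations.
def model(x, a, x0, w, b0, b1):
    lorentz = a / (1.0 + ((x - x0) / (0.5 * w)) ** 2)
    return (b0 + b1 * x) * (1.0 - lorentz)   # ramp attenuated by the line

x = np.arange(60.0)
true_params = (0.4, 30.0, 12.0, 800.0, 4.0)  # a, x0, w, b0, b1 (assumed)
rng = np.random.default_rng(1)
y = model(x, *true_params) + rng.normal(0.0, 2.0, x.size)  # synthetic pixel

p0 = (0.3, 28.0, 10.0, 750.0, 3.0)           # rough initial guess
popt, _ = curve_fit(model, x, y, p0=p0)      # LM is the default (no bounds)
```

In the prototype, only one averaged spectrum per cube gets this full fit; the recovered center and width are then held fixed for the per-pixel least-squares step.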
The peak absorbance is related to the peak absorption, and further to the distance, through the Beer-Lambert law
$$-\log(1 - a) = A = \varepsilon d c$$
where A is the absorbance, a is the absorption, ε is the molar absorptivity, d is the optical path length and c is the concentration of the attenuating species, in this case molecular oxygen. Assuming c is constant, using the known value of ε [15] and measuring A, d can be calculated for each pixel; in this case d is twice the range to the target. However, due to the approximations explained earlier, much more accurate ranging is achieved when the measured absorptions are calibrated against points with known distance, as will be shown in the next section.
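Inverting the Beer-Lambert relation gives the range directly from the fitted peak absorption. In the sketch below the product εc is folded into a single assumed constant (base-10 absorbance per metre of path), consistent with the illustrative value used earlier; the real value follows from the HITRAN line strength and 20.9% oxygen.

```python
import math

# Direct inversion of the Beer-Lambert relation; eps*c is folded into one
# ASSUMED constant (base-10 absorbance per metre of optical path).
EPS_C = 0.03

def range_from_absorption(a, eps_c=EPS_C):
    """Range to target in metres; the optical path d is twice the range."""
    absorbance = -math.log10(1.0 - a)
    return absorbance / eps_c / 2.0

# A target at 5 m (10 m path) produces an absorption of 1 - 10**(-0.3);
# near 75 m the line is almost fully absorbed and the inversion degrades:
r5 = range_from_absorption(1.0 - 10.0 ** (-0.3))
r75 = range_from_absorption(1.0 - 10.0 ** (-4.5))
```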

3. Results

The accuracy and noise of the 3D camera's ranging were determined in an indoor test area. Four cardboard boxes, 85 cm high and 60 cm wide, were used as test targets and placed at distances between 4 m and 10 m, as shown in Fig. 4(a). The boxes were tilted with respect to the 3D camera to prevent specular reflection. The distances from the camera to the targets were measured using a commercial rangefinder (Extech Instruments DT40M) with a reported accuracy of ±0.2 mm. The test targets were imaged using the developed 3D camera prototype. The measured 3D image and an RGB image of the test area are shown in Fig. 4(a). The illuminated area is elliptical due to the different divergences of the two laser axes. Parts with a low signal value, namely the floor, which was viewed at a steep angle, and the edges, were discarded and their distance set to 0 m. This left a total of 9183 pixels to be fitted. The 3D image shown in Fig. 4(a) was formed by binning 9 pixels and averaging 10 consecutive 3D frames. The ranges extracted from the 3D frame were compared to the actual ranges measured with the commercial rangefinder. The correlation plot is shown in Fig. 4(b), where the measured and actual ranges were taken around the areas marked as grey dots in the RGB image of Fig. 4(a). The measured ranges, based on pixel absorption, are shown on the x-axis and the actual ranges measured with the rangefinder on the y-axis. Without any corrections, using the measured absorption, the known absorption strength [15] and an assumed oxygen concentration of 20.9%, the calculated ranges were within 0.3 m of the reference. When a second order polynomial correction curve was fitted to the data, the mean absolute error was reduced to 3.4 cm. The correction curve is plotted as the black curve in Fig. 4(b) and the residuals as red dots below the plot.
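The polynomial correction step can be sketched as below. The raw and reference ranges here are synthetic stand-ins for the measured data, used only to show the shape of the calibration.

```python
import numpy as np

# Second-order polynomial calibration sketch: map raw absorbance-derived
# ranges onto reference ranges. Values are synthetic stand-ins.
raw = np.array([4.10, 6.20, 8.30, 10.25])   # camera-derived ranges (m), assumed
ref = np.array([4.00, 6.00, 8.00, 10.00])   # rangefinder reference (m), assumed

coeffs = np.polyfit(raw, ref, 2)            # 2nd order correction curve
corrected = np.polyval(coeffs, raw)
mae = np.mean(np.abs(corrected - ref))      # mean absolute error after fit
```

The fitted curve absorbs the systematic bias left by the line-shape approximations, which is why the residual error drops well below the uncorrected 0.3 m figure.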

Fig. 4. a) RGB image of the test area with the selected test points marked as grey circles, and a heat map plot of the 3D image. The RGB image was taken with a separate camera. The target at the largest distance was close to a wall at a similar distance and thus does not show up as a rectangle. b) Correlation plot of the calculated distance vs. the reference distance measurement, with a 2nd order polynomial fit and residuals. Sigma denotes the mean absolute error of the measured signal after fitting.


The noise and stability of the range measurement were studied using Allan deviation analysis [16], which is widely used in gas spectroscopy to study the noise and stability of concentration measurements. For the analysis, image acquisition continued for 400 s to capture a total of 300 3D frames. The Allan plot for the furthest distance of 10 m is shown in Fig. 5(a). Typically the Allan plot is shown against averaging time; here it is shown against the number of averaged 3D frames, taken at a 0.73 Hz frame rate. The dashed line indicates the behavior of fully uncorrelated noise, which decreases as $1/\sqrt N $, where N is the number of averaged measurements. The data shows that the noise in the range measurement was uncorrelated, and sub-centimeter precision was achieved by averaging 3D frames. The time series was not, however, long enough to reveal longer-term stability. The Allan data shown for the longest range was consistent with the other ranges, plotted as time series in Fig. 5(b). The distance noise is consistent across ranges: while the signal-to-noise ratio of the return signal is higher at shorter distances, this is compensated by the larger absorption signal at longer distances. For the final results, 9-pixel binning was used, which reduced the noise by a similar factor of 3; single-pixel noise values were thus around three times the binned values. Similar behavior is observed when studying the noise of the range measurement as a function of intensity. The intensity map of the 3D image of Fig. 4(a) is shown in Fig. 6(a). As stated earlier, the intensity values were cut off at a fixed threshold (850 counts) prior to fitting. The range noise of each pixel during the 300-frame measurement set is plotted against the mean intensity in counts in Fig. 6(b). A reduction of noise as $1/\sqrt N \; $ is observed, where N is the intensity in counts for a given pixel.
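A minimal Allan-deviation computation for a range time series can be written as below, using a synthetic white-noise series in place of the measured 300-frame data; for uncorrelated noise the deviation should fall roughly as $1/\sqrt N$ with the averaging block size, as observed above.

```python
import numpy as np

# Allan deviation sketch for a range time series. The series is synthetic
# (white noise around an assumed 10 m mean), standing in for measured data.
def allan_deviation(x, n):
    """Allan deviation for averaging blocks of n consecutive samples."""
    m = len(x) // n
    means = np.reshape(np.asarray(x)[: m * n], (m, n)).mean(axis=1)
    return np.sqrt(0.5 * np.mean(np.diff(means) ** 2))

rng = np.random.default_rng(0)
series = 10.0 + rng.normal(0.0, 0.03, 300)   # 300 frames, ~3 cm white noise
ad1 = allan_deviation(series, 1)             # single-frame deviation
ad25 = allan_deviation(series, 25)           # 25-frame averaging
```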

Fig. 5. a) Allan deviation analysis for a binned pixel. The range to the target was 10 m. A total of 300 3D frames were recorded during the 400 s acquisition time. b) Time series of the ranges for the four test targets. The range values were binned from 9 pixels around the target area. Sigma denotes the standard deviation of the range values.


Fig. 6. a) Intensity map of the 3D image of Fig. 4(a). b) Scatter plot of the range noise. The y-axis is the single-pixel standard deviation during the 300 3D frames; the x-axis is the signal intensity in counts.


4. Conclusions

A 3D camera based on the absorption of laser illumination by atmospheric oxygen was demonstrated, imaging objects at distances between 4 m and 10 m. The approach has the potential to yield very compact and low-cost devices that complement stereo camera and lidar based solutions. In this work we demonstrated a prototype capable of real-time 3D imaging at a 0.73 Hz frame rate, achieved using two-stage fitting of the absorption spectra in each pixel. Only a single scenario was studied here; the performance of a 3D camera based on these principles depends on multiple factors, such as how many pixels the light is divided over, the maximum detection range and the reflectance of the targets. The imaging sensor's sensitivity, linearity and dynamic range will also affect performance. In this prototype, the laser diverged freely towards the target area, giving an uneven intensity distribution at the target; different methods could be used to distribute the light more evenly across the field of view.

While the frame rate in this work is quite modest, there is no fundamental limitation preventing frame rates close to those of stereo vision cameras (10–20 Hz). This would, however, require much more efficient data processing than was possible with the present setup. Most importantly, the fitting routines should be implemented on an application-specific integrated circuit (ASIC) or a field-programmable gate array (FPGA) for real-time operation. The authors speculate that the simplest way to increase the frame rate is to reduce the number of points per oxygen spectrum; for example, measuring 6 points instead of 60 would alone increase the frame rate to close to 10 Hz. This will likely increase the noise in the spectral fitting, although the relationship between noise and the number of spectral points was not studied in this work.
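The frame-rate arithmetic can be checked directly. The sketch below assumes the dark frame is retained and the 45 Hz camera frame rate is unchanged, under which 6 spectral points yield roughly 6.4 Hz, on the order of the figure quoted above.

```python
# Back-of-envelope check of the frame-rate scaling: one 3D frame needs
# n_points illuminated frames plus one dark frame at the camera frame rate
# (assumed fixed at 45 Hz, as in this work).
CAM_HZ = 45.0

def rate_3d(n_points):
    return CAM_HZ / (n_points + 1.0)   # +1 for the dark frame

r60 = rate_3d(60)   # configuration used in this work (~0.74 Hz)
r6 = rate_3d(6)     # reduced-point configuration
```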

While this work focused on atmospheric oxygen, other molecules, such as water vapor and carbon dioxide, could be used with a similar approach. The attraction of molecular oxygen is its stable concentration and the ability to use low-cost silicon-based imaging sensors. While this study provided results in indoor conditions, several questions remain, such as the scalability of the speed through more integrated readout and fitting, as well as immunity to outdoor ambient light. Further interesting research topics include different modulation methods to improve operation in high ambient light conditions. The approach may also find use in combination with existing camera-based systems, such as stereo cameras or machine vision cameras.

Funding

Photonics Research and Innovation; Academy of Finland (320168).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. T. Okoshi, Three-dimensional imaging techniques, (Elsevier, 2012).

2. R. Horaud, M. Hansard, G. Evangelidis, et al., “An overview of depth cameras and range scanners based on time-of-flight technologies,” Mach. Vis. Appl. 27(7), 1005–1020 (2016). [CrossRef]  

3. L. Keselman, J. Iselin Woodfill, A. Grunnet-Jepsen, et al., “Intel realsense stereoscopic depth cameras,” Proc. CVPR IEEE, 1–10 (2017).

4. V. H. Hasson and R. D. Christopher, “Passive ranging through the Earth's atmosphere,” Proc. SPIE 4538, 49–56 (2002). [CrossRef]  

5. M. R. Hawks and G. P. Perram, “Passive ranging of emissive targets using atmospheric oxygen absorption lines,” Proc. SPIE 5811, 112–122 (2005). [CrossRef]  

6. M. R. Hawks and G. P. Perram, “Passive ranging of boost-phase missiles,” Proc. SPIE 6569, 145–155 (2007). [CrossRef]  

7. M. R. Hawks, R. A. Vincent, J. Martin, et al., “Short-Range Demonstrations of Monocular Passive Ranging Using O2 (X3Σg− → b1Σg+) Absorption Spectra,” Appl. Spectrosc. 67(5), 513–519 (2013). [CrossRef]  

8. Z. Yu, B. Liu, H. Yu, et al., “Measurement and analysis of sky background spectra in passive ranging,” Proc. SPIE 9677, 47–51 (2015). [CrossRef]  

9. H. Yu, B. Liu, Z. Yan, et al., “Passive ranging using a filter-based non-imaging method based on oxygen absorption,” Appl. Opt. 56(28), 7803–7807 (2017). [CrossRef]  

10. H. Yu, B. Liu, Y. Zhang, et al., “Three-channel filter-based non-imaging passive ranging system based on oxygen absorption,” Opt. Laser Technol. 111, 797–801 (2019). [CrossRef]  

11. D. J. Macdonald, M. R. Hawks, and K. C. Gross, “Passive ranging using mid-wavelength infrared atmospheric attenuation,” Proc. SPIE 7660, 1269–1277 (2010). [CrossRef]  

12. R. J. Smith, “Differential absorption laser ranging at the oxygen A-band,” Proc. SPIE 1219, 525–531 (1990). [CrossRef]  

13. P. Siozos, G. Psyllakis, and M. Velegrakis, “A continuous-wave, lidar sensor based on water vapour absorption lines at 1.52 µm,” Remote Sens. Lett. 13(11), 1164–1172 (2022). [CrossRef]  

14. P. Siozos, G. Psyllakis, P.C. Samartzis, et al., “Autonomous Differential Absorption Laser Device for Remote Sensing of Atmospheric Greenhouse Gases,” Remote Sens. 14(3), 460 (2022). [CrossRef]  

15. I. E. Gordon, L. S. Rothman, C. Hill, et al., “The HITRAN2020 molecular spectroscopic database,” J. Quant. Spectrosc. Radiat. Transfer 203, 3–69 (2017). [CrossRef]  

16. P. O. Werle, R. Mücke, and F. Slemr, “The limits of signal averaging in atmospheric trace-gas monitoring by tunable diode-laser absorption spectroscopy (TDLAS),” Appl. Phys. B 57(2), 131–139 (1993). [CrossRef]  




