Optica Publishing Group

Fast HDR image generation method from a single snapshot image based on frequency division multiplexing technology

Open Access

Abstract

Traditional high dynamic range (HDR) image generation algorithms such as multi-exposure fusion must capture multiple images before fusing them, which is slow and consumes considerable storage space, limiting the practical application of multi-exposure fusion. In this paper, a frequency division multiplexing method is used to successfully separate sub-images with different exposure values from a single snapshot image. The resolution of the HDR images generated by this method is nearly identical to that of traditional multiple-exposure methods, while the storage requirement is greatly reduced and the imaging speed is improved.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Dynamic range is one of the important performance parameters of optical imaging equipment, as it determines how accurately the brightness and contrast of natural scenes can be recorded. However, the dynamic range of typical CCD or CMOS image sensors is essentially fixed, so when capturing scenes with drastic lighting variations, the potential wells of the image sensor are easily saturated by the stronger light. This is detrimental to image detection and recognition in computer vision. Consequently, the problem of high dynamic range (HDR) imaging has drawn much attention from researchers and opened a new frontier in research and industry [1–5].

The difficulties of HDR generation mainly concern fast, real-time, and high-resolution imaging. Many methods currently generate high dynamic range images from low dynamic range captures; they fall into two broad categories: hardware improvement and software fusion. On the hardware side, some researchers have tried to create intelligent pixels by redesigning the circuit structure of image sensors [6–8], but such electronic sensors generally require complex fabrication processes and have high manufacturing costs. In recent years, researchers have more often relied on spatial light modulators (SLMs), such as liquid crystal displays (LCD) and liquid crystal on silicon (LCoS) [9,10]. To generate HDR images at high speed and in real time, however, many researchers choose a digital micromirror device (DMD): compared with liquid crystal modulators, the light efficiency of a DMD can reach 70% and its refresh rate can reach 22 kHz. Feng et al. [11] achieved per-pixel coded exposure and proposed an adaptive light intensity control algorithm to effectively modulate different light intensities for recovering HDR images. Exploiting the high-speed modulation capability of the DMD, researchers have also realized a high-speed imaging method based on time-domain multiplexing [12]. However, DMD-based HDR imaging suffers some loss of imaging resolution because masks must be generated iteratively, trading spatial resolution for temporal resolution. To the authors' knowledge, the multi-exposure method is commonly used to recover high-reflectance maps with very high fusion resolution [13]. Goshtasby et al. [14] determined the amount of information contained in segmented image blocks from their information entropy in order to select the blocks to fuse; the fused HDR image was then obtained by smoothing the edges of the image blocks.
Mertens et al. [15] took image quality indexes such as saturation and contrast as the basis for fusing an image sequence under a multi-resolution pyramid model, achieving results comparable to tone-mapping methods. Liu and Wang [16] proposed a multi-exposure fusion algorithm based on the dense scale-invariant feature transform (Dense SIFT) to describe the activity level of local details of the source images; for dynamic scenes with moving targets, it also effectively removes residual ghosts. However, these techniques require multiple photographs of a static scene with various exposure times and occupy a lot of internal storage space. How, then, can we keep the advantages of this method while avoiding its disadvantages?

Pixel-wise exposure control, which enables high-speed imaging by extracting sub-exposure images from a single capture in the frequency domain, combines nicely with this method, and it can be implemented with regular image sensors plus an additional optical modulator. In [17], an LCD-based frequency division multiplexed imaging technique was first proposed and preliminarily verified. In [18], pixel-wise optical coding of images during the exposure time, implemented with an LCoS SLM, successfully extracted sub-exposure images from a single capture; however, the image resolution in that work was low and limited by the slow response of LCD and LCoS devices, so the method cannot be applied to high-speed imaging. In the field of frequency division multiplexed high-speed imaging, Gragston et al. [19] used time-multiplexed structured detection with a DMD to achieve high-speed flame chemiluminescence imaging; the same team applied the Multiplexed Structured Image Capture (MUSIC) technique to image a laser-induced plasma [20] and demonstrated that two images with complementary spatial phases can restore the spatial resolution lost to the limitations of the low-pass spatial frequency filtering depth [21]. The review by Shaked et al. also provides valuable guidance for our experiments on field-of-view multiplexing and depth-of-field multiplexing [22]. In this paper, the method is applied to fast multi-exposure fusion imaging: sub-image information for different exposure times is successfully separated from a single snapshot image and fused, overcoming the traditional multi-exposure fusion algorithm's large storage footprint and slow capture speed.

In Section 2, we present the principle of optical modulation together with a simulated analysis. In Section 3, we introduce the experimental setup and evaluate the system performance. In Section 4, we provide experimental results on separating and fusing images with different exposure times from a single snapshot image.

2. Optical modulation through Fourier-based method

This section introduces modulation of the image in the Fourier domain by convolving sub-image spectra with a specific modulation function. First, we introduce the frequency modulation principle in the two-dimensional Fourier domain of an image; then we design the modulation mask and analyze it in simulation.

2.1 Fourier domain modulation coding

Frequency division multiplexing is generally used for one-dimensional signal transmission in communications to improve frequency-band utilization. Each signal is spectrally shifted by its own modulator; the shifted signals are combined into a multiplexed signal by a circuit adder, with guard bands reserved between them, and the receiver separates them with corresponding band-pass filters. Frequency division multiplexing in the image domain is mathematically consistent with the one-dimensional case. To shift the light intensity within a specific exposure period in the frequency domain, we need special modulation masks whose spectra are convolved with the image spectra. Based on the frequency-domain characteristics of common functions, the sine and square-wave functions are frequently chosen for this operation, because both have a large first-order component at f = 1/T, so more image information can be distributed to the different sub-frequency bands. Figure 1 shows the Fourier domain of the applied functions.

Fig. 1. Fourier transform of sine and square functions. (a) Square function. (c) Sine function. (b) and (d) represent the Fourier transforms of (a) and (c), respectively.

Extending this method to two dimensions, the modulation masks for the sine and square functions we selected can be expressed as (taking the x axis as an example):

$${M_1}(x,y) = a + b\cos (2\pi {u_0}x)$$
$${M_2}(x,y) = {M_2}(x + T,y) = \left\{ \begin{array}{ll} 1&|{x(y)} |< {T_1}\\ 0&{T_1} < |{x(y)} |< T/2 \end{array} \right.$$
where (x, y) is the coordinate system of the spatial light modulator, u0 is the frequency of the trigonometric function, and T is the period of the square wave. Considering the guard bands needed in the image domain, the frequency-shift center should lie as far from the origin of the frequency domain as possible, so that the recovered sub-images are less disturbed by other information. The position of the frequency-shift center is determined by the sub-peak coordinate in the frequency domain of the modulation mask. Since the camera pixels are discrete, the square function supports a longer modulation length than the sine, so we choose the square mask for the simulation analysis and experimental verification in this paper. The principle of image frequency division multiplexing is given by
$$I(x,y) = \sum\limits_{i = 1}^n {{\alpha _i}{M_i}} (x,y)\int_{{t_{i - 1}}}^{{t_i}} {{I_i}(x,y,t)} dt\textrm{ = }\sum\limits_{i = 1}^n {{\alpha _i}{M_i}} (x,y){I_i}(x,y)$$
where ${I_i}(x,y)$ is the image information collected within the sub-exposure period of the camera, ${M_i}(x,y)$ is the modulation masks corresponding to the n sub-exposure periods, ${\alpha _i}$ is the weight coefficient corresponding to each sub-image, and $I(x,y)$ is the fusion image obtained within this single exposure period. Take the square mask as an example, the Fourier transform of mathematical form for each sub-exposure time can be given by
$${\cal F}[{I{}_i(x,y) \cdot {M_i}(x,y)} ]= \widetilde I{}_i(u,v) \ast \sum\limits_{n = \textrm{ - }\infty }^{\textrm{ + }\infty } {\frac{{2\sin (n{u_0}{T_1})}}{n}} \cdot \delta (u - n{u_0})$$
where $\widetilde I{}_i(u,v)$ denotes the Fourier transform of the sub-image, the Fourier domain of the modulation function is a discrete sinc function, and u0 is the coordinate after the frequency shift. If the modulation template is a function of the x variable only, then u0 is the x-coordinate of the frequency-shift center in the Fourier domain of the image. In the last step, we use band-pass filters to extract the light intensity information at the different frequency-shift centers and recover the sub-image data through the inverse Fourier transform.
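The extraction step described above — band-pass filtering around a frequency-shift center and inverse-transforming — can be sketched as follows. This is our own illustrative NumPy implementation with an ideal circular passband (the paper's experiments use a Butterworth filter instead); the function name and interface are assumptions.

```python
import numpy as np

def recover_subimage(fused, center, radius):
    """Extract one sub-image from a multiplexed snapshot: band-pass filter
    the spectrum around the sideband at `center` = (u, v) offset from DC,
    shift that band back to the origin, and inverse-transform."""
    F = np.fft.fftshift(np.fft.fft2(fused))
    u, v = np.meshgrid(np.arange(F.shape[1]), np.arange(F.shape[0]))
    cu = F.shape[1] // 2 + center[0]
    cv = F.shape[0] // 2 + center[1]
    # ideal (hard-edged) circular passband around the sideband peak
    keep = (u - cu) ** 2 + (v - cv) ** 2 <= radius ** 2
    band = np.zeros_like(F)
    band[keep] = F[keep]
    # shift the sideband back to DC before inverting
    band = np.roll(band, (-center[1], -center[0]), axis=(0, 1))
    return np.abs(np.fft.ifft2(np.fft.ifftshift(band)))
```

For a constant image modulated by a 50%-duty square fringe of period 4 pixels, the first-order sideband sits at one quarter of the sampling frequency, and the recovered magnitude is uniform.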

As shown in Fig. 2, the inputs of the multi-exposure method are images with different exposure times. We multiply the sub-images with different gray values (average gray values ${I_1}\lt {I_2}\lt {I_3}\lt {I_4}\lt \ldots \lt {I_n}$) by different modulation masks, one per sub-exposure period, and obtain a single-exposure fusion image in the frequency domain. Band-pass filtering is then applied to the fusion image to recover the sub-images with different exposure times. Finally, a multi-exposure fusion algorithm generates an HDR image from the recovered sub-images.

Fig. 2. Frequency division multiplexed imaging of multi-exposure fusion method

2.2 Fusion and separation in the frequency domain

We use pictures of plaster sculptures taken from different positions in the lab as inputs for the simulation analysis, and use square masks in four directions as the modulation functions to simulate the effect of the spatial light modulator. In the simulation, the modulation template uses binary (0/1) black-and-white fringes with a period of T = 4 pixels and a frequency of f = 512 Hz. The fringe pattern with the maximum frequency is chosen to ensure a certain protective bandwidth and to reduce interference between sub-images.

Starting from the x axis, the fringe mask is rotated by 0°, 45°, 90°, and 135° to generate mask patterns in four directions, so that the sub-images can be separated in the frequency domain. As shown in Fig. 3, Mi are the modulation masks in the four directions, and Ii are the original sub-images at four different times within the whole exposure period. The weight of each input image over the entire exposure period is $\alpha \textrm{ = }1/n\textrm{ = }1/4$; the circles of different colors in the frequency domain are band-pass filters at different positions, with radius $r = 0.5 \ast f$, where f is the frequency of the modulation templates. The simulation verifies that this method can separate sub-images with different exposures in the frequency domain.
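A minimal simulation of this four-direction multiplexing might look as follows. This is our own sketch: the equal weights αi = 1/4 follow the description above, while the rotation convention, function names, and 50% duty cycle are illustrative assumptions.

```python
import numpy as np

def fringe_mask(shape, period, angle_deg):
    """Binary square-wave fringe mask whose stripes are rotated by angle_deg."""
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    t = np.deg2rad(angle_deg)
    coord = x * np.cos(t) + y * np.sin(t)      # position along the fringe normal
    return ((coord % period) < period / 2).astype(float)

def multiplex(subimages, period=4):
    """Fuse n sub-images into one snapshot: each is weighted by 1/n and
    multiplied by a fringe mask rotated to a distinct direction
    (0, 45, 90, 135 degrees for n = 4)."""
    n = len(subimages)
    angles = [180.0 * i / n for i in range(n)]
    return sum(img * fringe_mask(img.shape, period, a)
               for img, a in zip(subimages, angles)) / n
```

Each rotated mask places its sideband along a different direction in the Fourier plane, which is why the four sub-images can later be isolated with band-pass filters at different positions.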

Fig. 3. Process of frequency domain multiplexing and separation. Red, yellow, green, and blue circles represent different positions of bandpass filters, the circles having the same color represent the sub-images extracted in the corresponding frequency band.

3. Experiment setup and evaluation

Figure 4 illustrates the multi-exposure HDR imaging system based on the frequency division multiplexing principle. The DMD is the core component of the imaging system; some researchers use LCoS as the spatial light modulator because it is easier to match with the sensor, but a DMD offers a higher modulation speed as a fast optical switch. The system mainly consists of a CMOS sensor, a total internal reflection (TIR) prism, bidirectional telecentric lenses, relay lenses, industrial lens sets, and a digital micromirror device. The SLM used in the experiment is a 0.65-inch DMD made by Texas Instruments (TI), with the DLP6500 & DLPC910 chipset. The DMD is placed just behind the TIR prism and implements programmable imaging by loading different coded patterns. When the micromirrors are in the on state, the reflected light (green arrow) passes through the prism and the bidirectional telecentric lens and is imaged at the corresponding CMOS pixel. When the micromirrors are in the off state, the incident light is deflected away and never reaches the CMOS sensor.

Fig. 4. Optical system prototype and multi-exposure HDR imaging system

System matching is necessary for modulation accuracy. It refers to aligning the spatial positions of the DMD and the sensor, and depends on the accuracy of a six-axis precision displacement table and on the moiré fringes produced by the gaps between pixels and micromirrors [23]. We also performed system mapping, i.e., computing the coordinate transformation matrix that converts captured camera pixel coordinates into DMD micromirror coordinates. The transformation matrix is mainly obtained by polynomial fitting, using either the least-squares method or gradient descent. In the experiment, whichever method is used for the pixel-level fit, the error is always within 0.5 pixels; the details are not described here.
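For illustration, the least-squares variant of this mapping can be sketched with an affine model. This is our own simplification (the paper fits polynomials, which can be handled the same way by appending higher-order columns to the design matrix); the function names are assumptions.

```python
import numpy as np

def fit_affine(cam_pts, dmd_pts):
    """Least-squares affine map from camera pixel coordinates to DMD
    micromirror coordinates. Higher-order polynomial terms can be added
    as extra columns of the design matrix A."""
    cam = np.asarray(cam_pts, float)
    dmd = np.asarray(dmd_pts, float)
    A = np.column_stack([cam, np.ones(len(cam))])    # rows: [x, y, 1]
    coef, *_ = np.linalg.lstsq(A, dmd, rcond=None)   # 3x2 transform
    return coef

def apply_map(coef, pts):
    """Map camera points through the fitted transform."""
    pts = np.asarray(pts, float)
    return np.column_stack([pts, np.ones(len(pts))]) @ coef
```

Given a handful of corresponding calibration points, the residual of such a fit can be checked directly against the sub-pixel error budget quoted above.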

Before conducting experiments, we need to evaluate the imaging performance of the system, including the imaging resolution and the definition of the recovered sub-images. The USAF1951 optical test pattern is selected as the resolution standard. By consulting the USAF1951 table, the resolution of the unmodulated test-pattern image is about 62.5 μm (corresponding to the first element of the third group). To evaluate the definition of the recovered sub-images, we compare them with the corresponding original images. PSNR (peak signal-to-noise ratio) and SSIM (structural similarity) [24] are two widely used objective image evaluation criteria. PSNR measures the reconstruction quality of a lossy transformation (larger is better) and is generally expressed as

$$MSE = \frac{1}{{mn}}\sum\limits_{i = 0}^{m - 1} {\sum\limits_{j = 0}^{n - 1} {{{[I(i,j) - {I_K}(i,j)]}^2}} } $$
$$PSNR = 20 \cdot {\log _{10}}(\frac{{MA{X_I}}}{{\sqrt {MSE} }})$$

MSE is the mean square error of the image, I and IK are the original image and the recovered sub-image respectively. SSIM is a relatively independent subjective measurement based on brightness, contrast and structure, which is used to measure the structural similarity between images. It is more consistent with the judgment of human eyes on image quality, which is generally expressed as

$$SSIM(x,y) = \frac{{(2{\mu _x}{\mu _y} + {c_1})(2{\sigma _{xy}} + {c_2})}}{{({\mu _x}^2 + {\mu _y}^2 + {c_1})({\sigma _x}^2\textrm{ + }{\sigma _y}^2 + {c_2})}}$$
where ${\mu _x}$ and ${\mu _y}$ are the mean values of x and y, ${\sigma _x}$ and ${\sigma _y}$ their standard deviations, ${\sigma _{xy}}$ their covariance, and ${c_1}$, ${c_2}$ are stabilizing constants. We took the PSNR and SSIM values of the USAF1951 images before and after Fourier filtering of the originals as references, and tested the reconstruction performance of band-pass filters with different radii, as shown in Table 1.
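Equations (5)-(7) can be computed directly. The sketch below is our own; for SSIM it evaluates a single global window, whereas production implementations average local scores over a sliding Gaussian window.

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB, per Eqs. (5)-(6)."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 20 * np.log10(max_val / np.sqrt(mse))

def ssim_global(x, y, max_val=255.0):
    """Single-window SSIM following Eq. (7). The c1, c2 constants use the
    conventional 0.01 and 0.03 factors (our assumption, not from the paper)."""
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images score an SSIM of 1; PSNR grows as the mean square error shrinks.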


Table 1. Quality Comparison of USAF1951 Using Different Bandpass Filter Based on PSNR and SSIM

The Butterworth filter clearly outperforms the other filters in both PSNR and SSIM. To make the contrast even clearer, we added line profiles in Fig. 5. In the table, f is the fringe frequency of the modulation mask, and the multiplication factor ranges from 0.1 to 1. For static images, the larger the radius of the filter, the higher the quality of the recovered image. To avoid interference from neighboring sub-images, we generally choose a factor of 0.5; as the table shows, the filtered images still have high PSNR and SSIM values in this case.

Fig. 5. Quality Comparison of USAF1951 Using Different Bandpass Filter Based on PSNR and SSIM

As can be seen from Table 2, the quality of sub-images separated from the sidebands by the frequency division multiplexing method decreases somewhat compared with Fourier filtering of the original image directly, but the decrease is not obvious. To make the contrast even clearer, we added line profiles in Fig. 6. Empirically, a PSNR between 30 and 40 dB indicates high similarity to the original image, so in the following experiments we use the Butterworth filter with the corresponding filter radius of $0.5 \ast f$.
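A Butterworth passband of the kind selected here can be built as follows. This is an illustrative sketch; the filter order is our assumption, as the paper does not state it.

```python
import numpy as np

def butterworth_bandpass(shape, center, radius, order=2):
    """Butterworth passband of the given radius centred on a sideband at
    `center` = (u, v) offset from DC. Its smooth roll-off reduces the
    ringing produced by an ideal (hard-edged) circular filter."""
    v, u = np.mgrid[0:shape[0], 0:shape[1]]
    cu = shape[1] // 2 + center[0]
    cv = shape[0] // 2 + center[1]
    d = np.sqrt((u - cu) ** 2 + (v - cv) ** 2)
    return 1.0 / (1.0 + (d / radius) ** (2 * order))
```

The response is 1 at the sideband center and 0.5 at the cutoff radius, falling off faster for higher orders.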

Fig. 6. Quality Comparison of USAF1951 Using Different Bandpass Filter Based on PSNR and SSIM


Table 2. Quality Comparison of Original USAF1951 Images and Recovered USAF1951 Images Using FDM method Based on PSNR and SSIM

Figure 7 shows an image of the USAF1951 optical test pattern in the experimental modulation process; the frequency domain of this image is modulated in four directions. Because a static image is used, the image information recovered from the four sideband directions is identical except for the fringes, as shown in Fig. 7(c), which corresponds to the sub-image of the yellow band-pass filter.

Fig. 7. Recovery of modulated USAF1951 optical test pattern using Fourier filter. (a) Modulated test pattern, (b) Fourier domain, passband of filter is $0.5 \ast f\textrm{ = }135Hz$, (c) recovered image from yellow passband filter, (d) recovered image from (c).

4. Results and discussion

The premise of a multi-exposure fusion algorithm is to obtain input images with different exposure times. In this paper, the image information of different exposure times is fused in the frequency domain by frequency division multiplexing, the sub-image information is extracted with the corresponding band-pass filters, and the multiple-exposure sub-images are then fused directly in software. The multi-exposure fusion algorithm proposed by Mertens et al. [15], which uses image quality indexes such as saturation and contrast as the basis for fusing an image sequence, is suitable for fusing gray images with different exposure intensities; we use it to fuse the recovered sub-images with different exposure times into HDR images. The first experimental object is a USAF1951 optical test pattern with exposure times of 16 ms, 12 ms, 8 ms, and 4 ms. At 16 ms and 12 ms, part of the recovered sub-image is overexposed; at 8 ms and 4 ms, part of it is too dark.
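To convey the idea of the Mertens-style fusion used here, the sketch below computes per-pixel weights from local contrast and well-exposedness and takes a normalized weighted average. This is our own simplification: the actual algorithm [15] blends the weighted images in a multi-resolution pyramid to avoid seams and, for color images, also uses a saturation measure.

```python
import numpy as np

def exposure_fuse(stack, sigma=0.2):
    """Naive exposure fusion for grayscale images scaled to [0, 1]:
    weight each exposure per pixel by local contrast (a Laplacian
    response) times well-exposedness (a Gaussian around mid-gray),
    then take the normalized weighted average."""
    weights = []
    for img in stack:
        lap = np.abs(4 * img
                     - np.roll(img, 1, 0) - np.roll(img, -1, 0)
                     - np.roll(img, 1, 1) - np.roll(img, -1, 1))  # contrast
        well = np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))     # well-exposedness
        weights.append(lap * well + 1e-12)                        # avoid zero weights
    W = np.array(weights)
    W /= W.sum(axis=0)                                            # normalize per pixel
    return (W * np.array(stack)).sum(axis=0)
```

Overexposed and underexposed regions receive low weight, so each pixel of the fused result is dominated by the exposure that rendered it best.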

Figure 8 shows the whole frequency-division-multiplexing-based multi-exposure HDR fusion method. Figure 8(a), generated in a single exposure period of the camera, fuses four images with exposure times of 16 ms, 12 ms, 8 ms, and 4 ms. Figure 8(b) shows the frequency-domain information: the intensity values inside the band-pass filters of different colors clearly differ, reflecting the brightness of the images going from the longest to the shortest exposure. The three-dimensional frequency-domain view in Fig. 8(c) likewise shows that the peak in the band corresponding to the 16 ms exposure is larger than the other peaks. A peak-extraction algorithm gives frequency-domain peak values of 11.57, 10.58, 9.59, and 8.82 for the four sub-images. Figures 8(d)-(g) are the sub-images recovered for the different exposure times; local magnification shows the images changing from overexposed to underexposed. Compared with the HDR image in Fig. 8(i), obtained by fusing four separate camera shots, the PSNR between the two reaches 31.12 dB, indicating that the single-shot HDR image generated by this method is very close to the traditional one.

Fig. 8. Multi-exposure HDR method using frequency division multiplexed imaging. (a) Modulated USAF1951 test pattern using four different exposure times, (b) Fourier domain, E refers to the total energy within the bandpass filter of different colors, (c) three-dimensional view of the amplitude in the frequency domain, (d)-(g) recovered sub-images with exposure time of 16 ms, 12 ms, 8 ms, 4 ms, (h) HDR image using sub-images, (i) HDR image using original ones.

We also conducted experiments on coins and metal defect plates. As seen in Fig. 9, we performed HDR fusion on the sub-images extracted from the sidebands of the two frequency domains and obtained good image resolution. The PSNR values between the two fused images and the corresponding original HDR images are 30.12 dB and 32.56 dB, respectively. This indicates that the resolution of HDR images generated from a single snapshot is close to that of HDR images synthesized by traditional multiple-exposure methods, and that, as a new way of generating multi-exposure HDR images, the method has clear advantages in simplicity and speed.

Fig. 9. Multi-exposure HDR method using coin and defective plate based on frequency division multiplexed imaging.

5. Conclusion

In summary, we have demonstrated the advantages of frequency division multiplexing for the multi-exposure fusion method. Within a single exposure period of the camera, we shift the sub-images with different exposure times to distinct spatial-frequency regions, so that a single snapshot contains the image information of all exposure times; we then successfully separate the images with different exposure values from this single image and apply an HDR fusion algorithm. To a certain extent, this method overcomes the disadvantage of multiple-exposure methods requiring several captured images, improves the imaging speed, and reduces storage requirements: instead of storing multiple images for a later fusion step, it occupies the memory of just one image. Furthermore, the method exploits the modulation speed of the DMD, so it not only addresses the weakness of multi-exposure fusion technology but can also be applied to high-speed imaging. Since the imaging speed depends on the modulation speed of the DMD and on the number of modulation directions in the frequency domain, the method also has application prospects in spectral imaging and biological microscopic imaging.

Funding

National Key Research and Development Program of China (2018YFB2003501); National Natural Science Foundation of China (51775379).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. P. J. Lapray, “Exploiting redundancy in color-polarization filter array images for dynamic range enhancement,” Opt. Lett. 45(19), 5530–5533 (2020). [CrossRef]

2. B. Liang, D. Weng, Y. Bao, Z. Tu, and L. Luo, “Method for reconstructing a high dynamic range image based on a single-shot filtered low dynamic range image,” Opt. Express 28(21), 31057–31075 (2020). [CrossRef]

3. C. Yu, F. Ji, J. Xue, and Y. Wang, “Adaptive Binocular Fringe Dynamic Projection Method for High Dynamic Range Measurement,” Sensors 19(18), 4023 (2019). [CrossRef]

4. L. Zhang, Q. Chen, C. Zuo, and S. Feng, “High-speed high dynamic range 3D shape measurement based on deep learning,” Optics and Lasers in Engineering 134, 106245 (2020). [CrossRef]  

5. L. Zhang, Q. Chen, C. Zuo, and S. Feng, “Real-time high dynamic range 3D measurement using fringe projection,” Opt. Express 28(17), 24363–24378 (2020). [CrossRef]

6. M. Ikebe and K. Saito, “A wide-dynamic-range compression image sensor with negative-feedback resetting,” IEEE Sens. J. 7(5), 897–904 (2007). [CrossRef]  

7. L. W. Lai, C. H. Lai, and Y. C. King, “A novel logarithmic response CMOS image sensor with high output voltage swing and in-pixel fixed-pattern noise reduction,” IEEE Sens. J. 4(1), 122–126 (2004). [CrossRef]  

8. R. Oi and K. Aizawa, “Wide dynamic range imaging by sensitivity adjustable CMOS image sensor,” in Proceedings of the 2003 International Conference on Image Processing, Vol. 2, pp. 583–586 (2003).

9. X. H. Li, C. K. Sun, and P. Wang, “The image adaptive method for solder paste 3D measurement system,” Optics and Lasers in Engineering 66, 41–51 (2015). [CrossRef]  

10. Z. D. Yang, P. Wang, X. H. Li, and C. K. Sun, “3D laser scanner system using high dynamic range imaging,” Optics and Lasers in Engineering 54, 31–41 (2014). [CrossRef]  

11. W. Feng, F. Zhang, W. Wang, W. Xing, and X. Qu, “Digital micromirror device camera with per-pixel coded exposure for high dynamic range imaging,” Appl. Opt. 56(13), 3831–3840 (2017). [CrossRef]

12. W. Feng, F. Zhang, X. Qu, and S. Zheng, “Per-Pixel Coded Exposure for High-Speed and High-Resolution Imaging Using a Digital Micromirror Device Camera,” Sensors 16(3), 331 (2016). [CrossRef]  

13. B. Gu, W. J. Li, J. T. Wong, M. Y. Zhu, and M. H. Wang, “Gradient field multi-exposure images fusion for high dynamic range image visualization,” J. Vis. Commun. Image Represent. 23(4), 604–610 (2012). [CrossRef]

14. A. A. Goshtasby, “Fusion of multi-exposure images,” Image and Vision Computing 23(6), 611–618 (2005). [CrossRef]  

15. T. Mertens, J. Kautz, and F. Van Reeth, “Exposure Fusion: A Simple and Practical Alternative to High Dynamic Range Photography,” Computer Graphics Forum 28(1), 161–171 (2009). [CrossRef]  

16. Y. Liu and Z. Wang, “Dense SIFT for ghost-free multi-exposure fusion,” J. Vis. Commun. Image Represent. 31, 208–224 (2015). [CrossRef]

17. N. Sampat, B. K. Gunturk, M. Feldman, and S. Battiato, “Frequency division multiplexed imaging,” Proc. SPIE 8660, 86600P (2013). [CrossRef]  

18. S. R. Khan, M. Feldman, and B. K. Gunturk, “Extracting sub-exposure images from a single capture through Fourier-based optical modulation,” Signal Processing: Image Communication 60, 107–115 (2018). [CrossRef]  

19. M. Gragston, C. D. Smith, and Z. Zhang, “High-speed flame chemiluminescence imaging using time-multiplexed structured detection,” Appl. Opt. 57(11), 2923–2929 (2018). [CrossRef]

20. M. Gragston, C. Smith, D. Kartashov, M. N. Shneider, and Z. Zhang, “Single-shot nanosecond-resolution multiframe passive imaging by multiplexed structured image capture,” Opt. Express 26(22), 28441–28452 (2018). [CrossRef]

21. W. McCord, Z. He, N. Williamson, C. Smith, M. Gragston, and Z. Zhang, “Two-phase accurate multiplexed structured image capture (2pAc-MUSIC),” Optics and Lasers in Engineering 142, 106621 (2021). [CrossRef]  

22. N. T. Shaked, V. Micó, M. Trusiak, A. Kuś, and S. K. Mirsky, “Off-axis digital holographic multiplexing for rapid wavefront acquisition and processing,” Adv. Opt. Photonics 12(3), 556 (2020). [CrossRef]  

23. S. Ri, M. Fujigaki, T. Matui, and Y. Morimoto, “Accurate pixel-to-pixel correspondence adjustment in a digital micromirror device camera by using the phase-shifting moiré method,” Appl. Opt. 45, 6940–6946 (2006). [CrossRef]  

24. Z. Wang, E. P. Simoncelli, and A. C. Bovik, “Multi-scale structural similarity for image quality assessment,” in Conference Record of the Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, Vols. 1 and 2, M. B. Matthews, ed. (IEEE, New York, 2003), pp. 1398–1402.
