
Quaternary pulse width modulation based ultra-high frame rate scene projector used for hardware-in-the-loop testing

Open Access

Abstract

The scene projector (SP) provides simulated scene images with the same optical characteristics as real scenes to evaluate imaging systems in hardware-in-the-loop (HWIL) simulation testing. An SP based on a single scene generation device (SGD) typically projects 8-bit images at 220 fps, which is insufficient for ultra-high frame rate imaging systems such as star trackers and space debris detectors. In this paper, an innovative quaternary pulse width modulation (PWM) based SP is developed and implemented to realize ultra-high frame rate projection. By optically overlapping the modulation layers of two digital micro-mirror devices (DMDs) in parallel and illuminating them with different light intensities, a quaternary SGD is built up to modulate quaternary digit-planes (QDs) with four grayscale levels, and quaternary digit-plane decomposition (QDD) is adopted to decompose an 8-bit image into 4 QDs. In addition, the exposure time of each QD is controlled by quaternary PWM, and the base time is optimized to 8 µs. The experimental results prove that the total exposure time of all QDs sequentially modulated by quaternary PWM is approximately 760 µs, i.e., the SP projects 8-bit images at 1300 fps. The quaternary PWM using two DMDs in parallel dramatically improves the grayscale modulation efficiency compared to existing projection technologies, providing a new approach to ultra-high frame rate SP design.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Scene projectors (SPs) provide simulated scene images with the same optical characteristics as real scenes to verify the reliability of airborne or spaceborne imaging systems during hardware-in-the-loop (HWIL) simulation testing, and have been extensively used in the aerospace industry [1]. With the maturity of imaging detectors, some high-speed imaging systems, including star trackers and space debris detectors, can acquire 8-bit images at 1000 fps; SPs should therefore project 8-bit images at a higher frame rate to satisfy the test requirements.

In an SP, the scene generation device (SGD) determines the projection frame rate and bit-depth, while the field of view, exit pupil distance and working band of the projection optics need to be matched with the unit under test (UUT). Liquid crystal display (LCD) [2] and liquid crystal on silicon (LCoS) [3] are the two kinds of SGD generally selected for a visible SP, and the relatively mature SGDs operating in infrared bands include infrared LCoS [4], laser diode arrays (LDAs) [5], resistor arrays [6] and blackbody micro cavity arrays (BMCAs) [7–9]. However, these SGDs, whether in the infrared or visible waveband, can only generate 8-bit grayscale images at less than 250 fps, which limits the application of SPs to ultra-high frame rate imaging systems. In recent years, a chip-scale light emitting diode (LED)-on-complementary metal-oxide-semiconductor (CMOS) projector was developed to satisfy demands for applications including computational imaging, fluorescence microscopy and highly parallel data communications [10]. Although its 128 × 128 micro-LED array can project binary patterns at up to 0.5 Mfps and offers 5-bit grayscale resolution updated at rates up to 83 kfps, the small pixel array means it is not well suited to traditional projector designs driven by image quality. In addition, to explore the limits of vection for ultra-high frame rate visual displays, a novel image rendering method was proposed to simulate refresh rates ranging from 15 fps to 480 fps [11]. Four frames were prerendered on the individual quadrants of each 1080P image, so that the quadrants present four consecutive frames of 960 × 540 pixels. When a digital light processing (DLP) LED projector projected the 1080P images at 120 fps, a refresh rate of 480 fps was achieved. Obviously, this method of increasing the frame rate at the cost of spatial resolution is also of little use for conventional projection display.

Compared to other SGDs, the digital micro-mirror device (DMD), a reflective SGD, can be used for projection display in various wavebands from ultraviolet to infrared as long as its window matches the waveband [12,13]. Besides, DMDs have also been used in structured-light-based three-dimensional (3D) sensing, interactive projection mapping, and other geometric and photometric applications [14–16]. J. Takei proposed a high frame rate 3D shape measurement system based on structured light projection, which consisted of a DMD projector and a high-speed camera; an image processing algorithm was introduced to detect and track the projected light pattern, achieving high frame rate 3D measurement [17]. A. B. Ayoub used a DMD to perform high-speed, complex wavefront shaping, with an electro-optic modulator synchronized to the DMD in an amplitude modulation mode to create grayscale patterns at 833 fps [18]. Y. Liu proposed a motion-compensated structured light vision system, which utilized high frame rate projection to achieve 3D shape measurement at 500 fps [19].

The micromirror array of a DMD sits on top of a CMOS memory: by storing '1' or '0', each micromirror is set to the on or off state to pass or block incident light, thereby projecting binary images. Due to this binary characteristic, an n-bit image must be decomposed into a series of bit-planes, whose number is proportional to the bit-depth. When the bit-planes are sequentially projected and time-integrated in the viewer's eye, ordered from the least significant bit-plane (LSB) to the most significant bit-plane (MSB), the grayscale image is perceived [20]. Binary pulse width modulation (PWM) is commonly used to control the exposure time of each bit-plane: the exposure time of each bit-plane is twice that of the previous one, so the total exposure time grows exponentially with the bit-depth of the desired n-bit image, reducing the maximum achievable projection frame rate accordingly [21]. Binary PWM thus fixes the relationship between frame rate and bit-depth for modulating grayscale images, which is the essential reason why a binary SGD cannot break through 250 fps when projecting 8-bit images. Based on the principle of binary PWM, there are two potential approaches to improve the frame rate of a DMD.

A. Compression of the base time in binary PWM

Typically, for a DMD with 1024 × 768 pixels, loading a binary image into the entire pixel array requires 30.72 µs. This is usually set as the minimum exposure time of the bit-planes, namely the base time t0, to ensure that loading of the next bit-plane can be completed while the previous one is displaying. Figure 1 shows the binary PWM based grayscale modulation process of an 8-bit image, where t0 is also the exposure time of the LSB. Assume the grayscale value of a pixel in the 8-bit image is 108. As seen in the figure, during the exposure periods of 4 of the bit-planes the corresponding micromirror of the pixel switches to the off state, so the actual exposure time of the pixel is 108 t0. By controlling the actual exposure time of each pixel through binary PWM, an 8-bit image with 256 grayscale levels can be created. However, the total exposure time of the 8 bit-planes is still 255 t0. The base time is a critical factor for increasing the frame rate: as the base time decreases, the frame rate rises linearly. After flipping, a micromirror needs to maintain its current state for at least 8 µs before flipping again. If 8 µs is set as the base time, only the first three bit-planes fail to meet the loading time requirement. Accordingly, a "block clear" operation that resets the entire DMD to zero can be performed after the exposure of such a bit-plane to stop the display until the next bit-plane has finished loading. As a consequence, the frame rate of projecting 8-bit images can be increased by roughly three times. Yet even with the base time compressed to its theoretical minimum, the frame rate cannot break through 500 fps, as the sketch below illustrates.
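As a quick check of these numbers, the binary PWM timing arithmetic can be sketched in a few lines of Python (a minimal illustration of the figures quoted above, not the authors' controller code):

def binary_pwm_frame_time_us(n_bits, base_time_us):
    # Exposure doubles per bit-plane: t0 + 2*t0 + ... + 2^(n-1)*t0 = (2^n - 1)*t0
    return (2 ** n_bits - 1) * base_time_us

for t0 in (30.72, 8.0):  # loading-time-limited vs. dwell-time-limited base time
    t_frame = binary_pwm_frame_time_us(8, t0)
    print(f"t0 = {t0:5.2f} us -> {t_frame:6.1f} us/frame = {1e6 / t_frame:.0f} fps")
# t0 = 30.72 us -> 7833.6 us/frame = 128 fps
# t0 =  8.00 us -> 2040.0 us/frame = 490 fps (still below 500 fps)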

Fig. 1. Binary PWM based grayscale modulation process of an 8-bit image

B. Binary hybrid light modulation

The traditional binary PWM suffers from a low frame rate due to the exponential dependence of the exposure time of a grayscale image on the desired bit-depth. By combining binary PWM with binary light intensity modulation, the number of bit-planes assigned to binary PWM can be decreased, thereby dramatically reducing the total exposure time needed to project a grayscale image. J. R. Chang named this method hybrid light modulation (HLM); it provides a broader design space where grayscale levels can be created not just by blocking light with a DMD but also via light intensity control of the light source [22]. The grayscale modulation process of binary HLM for an 8-bit image is shown in Fig. 2. Binary light intensity modulation is applied to the first four bit-planes, and the other bit-planes are assigned to binary PWM. The exposure times of the first four bit-planes remain at t0, while each of them is illuminated with half the light intensity of the next one; hence the light source intensity takes the values 1/16, 1/8, 1/4 and 1/2 of the non-coding level. From the 5th bit-plane, the light intensity increases to the non-coding level, but the exposure time starts to extend exponentially, from t0 to 8 t0. In this case, the total exposure time decreases to 19 t0 (see the sketch below): even without compressing the base time, the frame rate can exceed 1500 fps. However, the increase in frame rate comes at the cost of projection brightness. When light intensity modulation is performed on the first four bit-planes, the projection brightness of the same grayscale level is reduced by a factor of 16 compared to binary PWM. Furthermore, a blackbody source with a wide waveband cannot be synchronized with the DMD, making binary HLM unsuitable for infrared SP design. In fact, binary HLM is better suited to being combined with existing contrast-enhancing methods, such as advanced prism designs [23,24] or reallocation of the light distribution [25], to realize high dynamic range (HDR) projection display in the civilian field.
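A rough numeric rendering of this schedule (our reading of Ref. [22]'s scheme, not its published implementation):

def hlm_schedule(n_bits=8, k=4):
    # Returns (exposure in units of t0, relative source intensity) per bit-plane:
    # the first k planes keep exposure t0 and halve the intensity instead,
    # the remaining planes run plain binary PWM at the non-coding intensity.
    sched = []
    for i in range(n_bits):
        if i < k:
            sched.append((1, 2.0 ** (i - k)))   # intensities 1/16, 1/8, 1/4, 1/2
        else:
            sched.append((2 ** (i - k), 1.0))   # exposures t0, 2*t0, 4*t0, 8*t0
    return sched

total = sum(t for t, _ in hlm_schedule())
print(total)   # 19 -> 19*t0 per frame, versus 255*t0 for pure binary PWM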

Fig. 2. Binary HLM based grayscale modulation process of an 8-bit image

Using dual SGDs in series is another effective approach to relieving the conflict between projection frame rate and bit-depth. The outgoing light is modulated twice, and the projection bit-depth equals the product of the bit-depths of the two SGDs [26–30]. Although this method is mainly used in HDR projection display to dramatically increase the bit-depth, it can also be utilized to raise the frame rate when bit-depth is not of interest. For 8-bit projection, the two SGDs are driven by two customized 4-bit images; since each SGD only needs to modulate 4 bit-planes, the frame rate can in theory even exceed 2000 fps. However, this method suffers from several problems that make it difficult to popularize in SP design, especially in infrared wavebands. Firstly, the total optical efficiency is the product of the optical efficiencies of the two SGDs. The increase in frame rate inevitably reduces the projection brightness, so the optics must have higher efficiency to relieve the pressure on the light source intensity. Secondly, the splitting algorithm for the 8-bit images is far more complex than pure digital processing, and its reliability depends on calibrating the response function of each SGD, so the actual bit-depth of the projected images may be less than 8 bits. Finally, pixel-level alignment of the two SGDs is hard to achieve: the extra relay optics introduces distinct optical distortion for each SGD, imposing additional difficulties in realizing perfect alignment.

In this paper, an innovative quaternary PWM based SP is developed and implemented. A quaternary SGD using two DMDs in parallel is built up to modulate quaternary digit-planes (QDs) with four grayscale levels, and a quaternary digit-plane decomposition (QDD) based splitting algorithm together with quaternary PWM is adopted to realize high-speed grayscale modulation. Meanwhile, the pixel-level alignment of the two DMDs is discussed in detail. The prototype proves that the quaternary PWM based SP can project 8-bit images at 1300 fps. The proposed method fundamentally improves the grayscale modulation efficiency compared to binary spatial light modulation, and avoids the drawbacks of existing technologies, such as the brightness loss caused by intensity modulation of the light source, and the low optical efficiency, difficult pixel-level alignment, and complex image rendering brought by two spatial light modulators (SLMs) in series. Due to the reflective characteristic of the DMD, the proposed method can easily be extended to projectors working in various wavebands.

2. Principle of quaternary pulse width modulation

The principle of binary PWM indicates that the projection frame rate is determined by two critical factors: the base time and the number of bit-planes decomposed from an n-bit image, especially the latter. The exposure time of the MSB equals the total exposure time of all the other bit-planes, and the fewer the planes modulated by PWM, the shorter the time it takes to modulate a grayscale image. When the binary bit-plane decomposition (BBD) algorithm is used to process a grayscale image, the number of bit-planes is a constant decided by the bit-depth. To reduce the number of sub-images decomposed from an n-bit image, we decompose the image by QDD instead. Any gray value in an n-bit image can be expressed as

$$G(x,y) = \sum\nolimits_{i = 1}^{{n / 2}} {{g_i}} (x,y){4^{i - 1}},\forall {g_i} \in [0,1,2,3]$$
where the summation represents the quaternary expression of the grayscale value of a pixel with coordinate (x, y) on the desired n-bit image, and i represents the order of a quaternary digit. For example, a pixel grayscale value of 108 in an n-bit image can be represented as 1 × 4³ + 2 × 4² + 3 × 4¹ + 0 × 4⁰, i.e., the base-4 number 1230. All the pixel grayscale values of an 8-bit image can be converted to quaternary representations. If we traverse all the base-4 grayscale values and extract the digits at the same position, a QD is created. An 8-bit image can thus be decomposed into 4 QDs, half the number of bit-planes produced by BBD; we refer to this process as QDD.
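The decomposition is a pure digit extraction and is cheap to compute; a minimal NumPy sketch (illustrative only, not the authors' FPGA implementation) is:

import numpy as np

def qdd(image_8bit):
    # Peel off the base-4 digits of every pixel, least significant digit first;
    # each returned QD holds values in {0, 1, 2, 3}.
    img = image_8bit.astype(np.uint8)
    qds = []
    for _ in range(4):          # 8-bit image -> 4 quaternary digit-planes
        qds.append(img % 4)
        img = img // 4
    return qds

qds = qdd(np.array([[108]]))    # 108 = 1230 in base 4
print([int(q[0, 0]) for q in qds])   # [0, 3, 2, 1], i.e. the digits of 4^0..4^3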

To modulate QDs, analogously to modulating bit-planes with a binary SGD, two DMDs in parallel are utilized to build up a quaternary SGD, where the two DMDs are synchronously controlled and aligned so that their modulation layers optically overlap. On their own, the aligned DMDs are only capable of ternary spatial light modulation, since a pair of aligned micromirrors has three distinct state combinations: both off, both on, and one on with the other off. To distinguish the two cases within the last combination, the illumination intensity of DMD-1 is set to double that of DMD-2. As shown in Table 1, the gray value "0" is projected by two off-state micromirrors and the gray value "3" by two on-state micromirrors. When the micromirror of DMD-1 is off and the corresponding micromirror of DMD-2 is on, the gray value "1" is projected; when the micromirror of DMD-1 is on and that of DMD-2 is off, the gray value "2" is projected. Accordingly, the four gray levels contained in a QD can be expressed, namely a QD can be modulated.

Table 1. Grayscale values projected by two DMDs in parallel

The optical schematic of the quaternary SGD is shown in Fig. 3. The optics consists of two identical illumination lenses, a projection lens, a cube prism as the combiner, and two total internal reflection (TIR) prism groups. In each illumination lens, a condenser with two aspheric surfaces collects and collimates the light emitted by an LED, and the collimated beam is shaped and homogenized by two multi-lens arrays placed back-to-back to form a rectangular spot on the active area of the DMD. The TIR prism group in each optical path ensures that the on-state beam enters the projection lens while the off-state beam is directed away from it, greatly improving the projection contrast. The beams modulated by the two DMDs are combined by a non-polarizing cube prism with a reflection/transmission ratio of 50/50, and the combined beam is collimated by the projection lens. An industrial camera is used to receive the projected images and evaluate the performance of the SP prototype; the projection lens of the prototype and the imaging lens of the camera realize the mapping between the camera detector and the DMDs. In practice, the SP should present scenes at infinity to the UUT in HWIL simulation testing, so the optical configuration of the designed SP prototype is in line with the actual needs of the UUT.

Fig. 3. Optical schematic of quaternary SGD

Compared to other SGDs, the special illumination angle of the DMD greatly increases the design difficulty of a DMD-based projection optical engine. To reduce design cost, existing dual-DMD projection engines commonly adopt a spatial stereo layout instead of TIR prisms to separate the on-state and off-state beams; however, the resulting ultra-long projection and illumination working distances significantly reduce the compactness of the engine. Considering the assembly and alignment of the opto-mechanical system comprehensively, two TIR prism groups and a cube prism were ultimately introduced to connect the illumination and projection modules. During optimization, each TIR prism group is unfolded into a parallel plate, and the optical designs of the illumination and projection optics must account for the spherical aberration introduced by the unfolded plate. Although the illumination and projection optics are optimized separately due to the use of TIR prisms, they share an equal Lagrange invariant so as to match each other. The illumination exit pupil is set on the active area of the DMD, coinciding with the projection entrance pupil. The micromirrors of a DMD flip ±12° around their diagonal for the on and off states. Accordingly, the illumination axis folded by the TIR prism group should be kept at 24° to the projection axis and at 12° to the normal of an on-state micromirror. Meanwhile, the plane containing the two axes must be perpendicular to the diagonal direction, ensuring that the on-state beam exits along the projection axis. In the practical design, each illumination lens, including its TIR prisms, is rotated 45° around the projection axis to meet the illumination demands of the DMD, and the multi-lens arrays are rotated counterclockwise by 45° around the illumination axis to compensate for the position change of the rectangular spot caused by the rotation of the illumination optics. The specific design parameters of the projection lens are determined by the imaging lens, so that the projected parallel beam fills the entrance pupil of the imaging lens. The optical design results of the projection optics are presented in Fig. 4.

Fig. 4. Optical design results of the projection optics. (a) optical transfer function (OTF) curves of all the fields: moduli of the OTF are better than 0.6 at 37 cycles/mm; (b) distortion curves of wavelengths: distortions are less than 1%. The projection field of view and the projection exit pupil are ±5.75° and 31 mm respectively. The pixel size of DMD is 13.68 µm.

With the help of the quaternary SGD, the 4 QDs decomposed from an 8-bit image can be projected sequentially, from the least significant QD to the most significant QD. After projecting all QDs, the total charge generated by a pixel on the camera can be expressed as [31]

$$Q(x,y) = k{t_0}E\sum\nolimits_{i = 1}^{{n / 2}} {{g_i}({x,y} )} {4^{i - 1}},\forall {g_i} \in [{0,1,2,3} ]$$
where k is the photoelectric conversion coefficient of the specific detector, E is the illuminance constant determined by the light source of the projector, gi corresponds to the four combinations of micromirror states, and the summation represents the quaternary expression of the grayscale value of the pixel with coordinate (x, y) on the desired image. For instance, if the grayscale value of a pixel on an 8-bit image is 108, the corresponding Q(x, y) according to Eq. (2) is 108 k t0 E. Ultimately, the total photogenerated charge is converted into a grayscale value through analogue-to-digital conversion (ADC). An n-bit image has 4^(n/2) = 2^n possible grayscale values, thus Q has 2^n possible values, and the projected intensity resolution of a micromirror is also 2^n. It is worth noting that the exposure time of each QD must be extended to four times that of the previous one.

To drive the two DMDs, an 8-bit image should be split into two 4-bit images. The specific splitting algorithm contains three steps, and a combined sketch follows the final step:

Firstly, the 8-bit image is converted into a digital matrix of base-10 grayscale values, which is then transformed into a base-4 digital matrix according to Eq. (1). QDD is then performed on the base-4 digital matrix to obtain the 4 digital matrices of the QDs, as shown in Fig. 5(a); the visual images of the QDs are shown in Fig. 5(b).

Fig. 5. Process of QDD: (a) digital matrix decomposition (b) visual images of QDs.

Secondly, each QD is divided into two binary images composed of 0s and 1s so that it can be modulated by the quaternary SGD, as shown in Fig. 6. According to Table 1, if the grayscale value gi of a pixel in a QD is "0", the corresponding pixel on each binary image is set to "0"; if gi is "3", the corresponding pixel on each binary image is set to "1"; if gi is "2", the corresponding pixel on the binary image modulated by DMD-1 is set to "1" and that modulated by DMD-2 is set to "0"; if gi is "1", the pixel modulated by DMD-1 is set to "0" and the other is set to "1". Following this rule, the 4 QDs are divided into two sets of binary images.

Fig. 6. Binary decomposition of QD

Finally, each set of binary images is regarded as the bit-planes decomposed from a 4-bit image via BBD; accordingly, each set can be synthesized into a 4-bit image by applying BBD in reverse. The 8-bit image is thus split into two 4-bit images, which are uploaded to the two DMDs respectively and decomposed again into two sets of bit-planes through BBD to drive the respective DMDs, as shown in Fig. 7. Only an FPGA needs to be programmed to adjust the exposure times of the bit-planes.
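The three steps collapse into a short routine, since by Table 1 the DMD-1 bit of a QD value g is g >> 1 and the DMD-2 bit is g & 1 (a compact sketch of the splitting algorithm as we read it, not the authors' code):

import numpy as np

def split_for_dmds(image_8bit):
    # Split an 8-bit image into the two 4-bit images uploaded to the DMDs:
    # step 1 extracts each QD, steps 2-3 split it per Table 1 and recompose
    # the resulting bits into 4-bit values via BBD in reverse.
    img = image_8bit.astype(np.uint16)
    dmd1 = np.zeros_like(img)
    dmd2 = np.zeros_like(img)
    for i in range(4):                    # QDs, least significant first
        qd = (img // 4 ** i) % 4
        dmd1 |= (qd >> 1) << i            # DMD-1 carries the '2' bit (double intensity)
        dmd2 |= (qd & 1) << i             # DMD-2 carries the '1' bit
    return dmd1.astype(np.uint8), dmd2.astype(np.uint8)

d1, d2 = split_for_dmds(np.array([[108]]))
print(f"{int(d1[0, 0]):04b} {int(d2[0, 0]):04b}")   # 0110 1010, cf. Fig. 8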

Fig. 7. Binary composition of binary images

When the two bit-planes split from the same QD are synchronously projected, the desired QD is generated, and the exposure times of bit-planes split from different QDs must be set from t0 to 64 t0. Once the 4 QDs are sequentially projected, the desired 8-bit image is obtained. We refer to this quaternary spatial light modulation as quaternary pulse width modulation (QPWM); the specific grayscale modulation process is shown in Fig. 8. Assume the grayscale value of a pixel in the 8-bit image is 108, which is split into two 4-bit binary values, namely 0110 modulated by DMD-1 and 1010 modulated by DMD-2. The charge produced by the projection result of DMD-1 is expressed as

$$Q{(x,y)_{\textrm{DMD-1}}} = 2k{t_0}E\sum\nolimits_{{i_1} = 1}^{n/2} {g_{{i_1}}({x,y})} {4^{{i_1} - 1}} = 40k{t_0}E,\forall {g_{{i_1}}} \in [0,1]$$

Fig. 8. QPWM based grayscale modulation process of an 8-bit image

The charge produced by the projection result of DMD-2 is expressed as

$$Q{(x,y)_{\textrm{DMD-2}}} = k{t_0}E\sum\nolimits_{{i_2} = 1}^{n/2} {g_{{i_2}}({x,y})} {4^{{i_2} - 1}} = 68k{t_0}E,\forall {g_{{i_2}}} \in [0,1]$$

According to Eqs. (3) and (4), the total charge produced at the camera pixel is 108 k t0 E, the same as in Eq. (2), while the total exposure time is 85 t0. As a comparison, when DMD-2 alone projects the same 8-bit image using binary PWM, the charge produced at the same camera pixel is expressed as

$$Q{(x,y)_{\textrm{DMD - 2}}} = k{t_0}E\sum\nolimits_{{i_2} = 1}^n {{g_{{i_2}}}({x,y} )} {2^{{i_2} - 1}} = 108k{t_0}E,\forall {g_{{i_2}}} \in [0,1]$$

According to Eq. (5), the produced charge is still 108 k t0 E, while the total exposure time rises to 255 t0. Compared to binary PWM, quaternary PWM therefore increases the frame rate of the projection display by three times without reducing the projection brightness. More importantly, there is no need to dynamically modulate the illumination intensity of either DMD, a significant advantage that binary HLM does not have. If the base time is compressed to 8 µs, the frame rate of 8-bit projection easily breaks through 1300 fps. However, as the projection bit-depth increases, the frame rate still decreases exponentially; when the bit-depth exceeds 14 bits, intensity modulation of the light source should be introduced to reduce the number of QDs assigned to QPWM. The arithmetic of the worked example is summarized below.
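A numeric check of Eqs. (2)–(5) for the worked example (in units of k t0 E, illustrative only):

qds = [0, 3, 2, 1]                       # base-4 digits of 108, LSB first
b1 = [g >> 1 for g in qds]               # DMD-1 bits (double illumination, weight 2)
b2 = [g & 1 for g in qds]                # DMD-2 bits (weight 1)

q1 = 2 * sum(b * 4 ** i for i, b in enumerate(b1))   # Eq. (3): 40
q2 = sum(b * 4 ** i for i, b in enumerate(b2))       # Eq. (4): 68
print(q1, q2, q1 + q2)                               # 40 68 108, matching Eq. (2)

# Exposure budget per frame: 85*t0 for QPWM vs. 255*t0 for binary PWM, cf. Eq. (5)
print(sum(4 ** i for i in range(4)), 2 ** 8 - 1)     # 85 255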

3. Pixel-level alignment between two DMDs

To enable the quaternary SGD to achieve high-precision modulation, the modulation layers of the two DMDs should be optically overlapped pixel by pixel. In practice, the initial alignment depends mainly on purely mechanical adjustment, which cannot achieve pixel-level alignment at all; even small axial or lateral shifts cause visible artifacts and reduce the resolution of the projected image [32].

3.1 Camera-based calibration method

To achieve accurate alignment between the pixels of the two DMDs, we proposed a camera-based calibration method. According to the working principle of Zhang's camera calibration [33], the relationship between the world coordinates of a DMD pixel and the pixel coordinates of its projected image can be expressed as

$$\left[ {\begin{array}{{c}} \mu \\ \nu \\ 1 \end{array}} \right] = \frac{1}{{{Z_C}}}\left[ {\begin{array}{{ccc}} {{m_{11}}}&{{m_{12}}}&{{m_{13}}}\\ {{m_{21}}}&{{m_{22}}}&{{m_{23}}}\\ {{m_{31}}}&{{m_{32}}}&{{m_{33}}} \end{array}} \right]\left[ {\begin{array}{{c}} {{X_w}}\\ {{Y_w}}\\ 1 \end{array}} \right] = \frac{1}{{{Z_C}}}M\left[ {\begin{array}{{c}} {{X_w}}\\ {{Y_w}}\\ 1 \end{array}} \right]$$
where the M-matrix is the projection homography between the homogeneous world coordinates (Xw, Yw, 1) of a DMD pixel and the homogeneous pixel coordinates (µ, ν, 1) of the projected image, and 1/ZC is the scale factor.

Due to the coincidence of pupils, the projection lens and the imaging lens of the camera can be considered as a whole optical system. Accordingly, the projection process of the two DMDs can be regarded as acquiring two pictures of the DMD panels with the same feature points. Therefore, an affine transformation can be performed on each pair of feature points located in the respective pictures. The transformation between them is defined by a 3 × 3 homography H-matrix, which can be expressed as

$$\left[ {\begin{array}{{c}} {{\mu_1}}\\ {{\nu_1}}\\ 1 \end{array}} \right] = \left[ {\begin{array}{{ccc}} {{h_{11}}}&{{h_{12}}}&{{h_{13}}}\\ {{h_{21}}}&{{h_{22}}}&{{h_{23}}}\\ {{h_{31}}}&{{h_{32}}}&{{h_{33}}} \end{array}} \right]\left[ {\begin{array}{{c}} {{\mu_2}}\\ {{\nu_2}}\\ 1 \end{array}} \right] = H\left[ {\begin{array}{{c}} {{\mu_2}}\\ {{\nu_2}}\\ 1 \end{array}} \right]$$

Since the two identical DMDs are in parallel, the pictures of the DMDs have the same nonlinear distortions. To ensure that the projected image of DMD-2 overlaps the projected image of DMD-1 pixel by pixel, a specific affine transformation can be performed on the world coordinates of DMD-2, expressed as

$${\left[ {\begin{array}{{ccc}} {{X_{w2}}^{\prime}}&{{Y_{w2}}^{\prime}}&1 \end{array}} \right]^T} = {M_2}^{ - 1}H{M_2}{\left[ {\begin{array}{{ccc}} {{X_{w2}}}&{{Y_{w2}}}&1 \end{array}} \right]^T}$$
where the right side is a homogeneous coordinate before the transformation, the left side is the transformed coordinate, and M2 is the projection homography between the world coordinates of DMD-2 and the pixel coordinates of its picture. Furthermore, since the M-matrix is invertible [34], the H-matrix is similar to the M2−1HM2-matrix; that is, they express the same linear mapping of the corresponding feature points of the two DMDs in different bases. When the images uploaded to DMD-2 are transformed by the homography H, the projected images of DMD-2 overlap with those of DMD-1, namely pixel-level alignment between the two DMDs is achieved.
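In code, Eq. (8) amounts to conjugating the camera-frame homography into DMD-2 device coordinates and resampling the uploaded image; a minimal OpenCV sketch (our illustration, assuming H and M2 have already been estimated) is:

import cv2
import numpy as np

def prewarp_dmd2(image_dmd2, H, M2):
    # Apply the similar matrix M2^-1 @ H @ M2 of Eq. (8) to the image that will
    # be uploaded to DMD-2, so its projection overlaps DMD-1 pixel by pixel.
    T = np.linalg.inv(M2) @ H @ M2
    T = T / T[2, 2]                              # normalize the homogeneous scale
    h, w = image_dmd2.shape[:2]                  # 768 x 1024 for the DLP7000
    return cv2.warpPerspective(image_dmd2, T, (w, h))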

3.2 Calculation algorithm of H-matrix

To calculate the H-matrix between the DMD pictures to be registered, the image registration process shown in Fig. 9 is used. Firstly, the scale invariant feature transform (SIFT) algorithm detects feature points; then the fast library for approximate nearest neighbors (FLANN) matching strategy matches the feature points; finally, the random sample consensus (RANSAC) algorithm removes wrong matches and calculates the parameters of the H-matrix.

Fig. 9. The algorithm of image registration

The SIFT algorithm includes 4 basic steps. Firstly, a scale space is constructed and its extrema are detected using the Difference of Gaussians (DoG). Secondly, the extreme points detected in the DoG space are fitted to determine the exact position and scale of the keypoints, while keypoints with low contrast or unstable edge responses are removed to enhance matching stability and noise immunity. Thirdly, each keypoint is assigned an orientation based on local image gradients. Lastly, a descriptor is computed for each keypoint from the local image gradient magnitudes and orientations.

As an example, we used SIFT to detect the overall feature points and the local feature points of a grayscale image, as shown in Fig. 10. Since only part of the overall feature points match the local feature points, the pairs of feature points had to be filtered.

Fig. 10. SIFT feature point detection: global feature points shown on the left, local feature points shown on the right.

To achieve initial feature matching, we established a matching strategy via FLANN; the main steps are as follows:

  • (1) Assume I1(x,y) and I2(x,y) are the feature point sets of the two images to be matched. Points P and Q in I2 are the nearest and second-nearest neighbor feature points of feature point A in I1, with corresponding feature vectors FP, FQ and FA. Similarly, points R and S in I1 are the nearest and second-nearest neighbor feature points of feature point B in I2, with corresponding feature vectors FR, FS and FB. In addition, i indexes the dimensions of the descriptor space. The Euclidean distances dAP, dAQ, dBR and dBS can be expressed as
    $$\left\{ \begin{array}{l} {d_{AP}} = \sqrt {\sum\limits_{i = 1}^{64} {{{[{{F_A}(i) - {F_P}(i)} ]}^2}} } \\ {d_{AQ}} = \sqrt {\sum\limits_{i = 1}^{64} {{{[{{F_A}(i) - {F_Q}(i)} ]}^2}} } \end{array} \right.,\left\{ \begin{array}{l} {d_{BR}} = \sqrt {\sum\limits_{i = 1}^{64} {{{[{{F_B}(i) - {F_R}(i)} ]}^2}} } \\ {d_{BS}} = \sqrt {\sum\limits_{i = 1}^{64} {{{[{{F_B}(i) - {F_S}(i)} ]}^2}} } \end{array} \right.$$
  • (2) The distance ratios R1 and R2 are calculated and compared with a threshold T; when both are less than T, the feature points A and B are matched successfully. R1 and R2 are written as
    $${R_1} = \frac{{{d_{AP}}}}{{{d_{AQ}}}},{R_2} = \frac{{{d_{BR}}}}{{{d_{BS}}}}$$

The FLANN matching results under different thresholds T are shown in Fig. 11. The smaller the threshold T, the fewer matched pairs there are in total and the higher the registration accuracy. As the threshold T increases, the total number of matched pairs expands and mismatching occurs. To maintain the number of effective matched pairs, the threshold T is set to 0.9, and the RANSAC algorithm is used to eliminate mismatches and solve for the affine transformation parameters of the matched pairs.

Fig. 11. The matching effects of FLANN at different thresholds

In the RANSAC algorithm, four matched pairs are randomly selected as a sample and used to iteratively calculate the H-matrix of Eq. (7). The best estimate minimizes the cost function, which can be expressed as

$$f({\mu _{1i}},{\nu _{1i}},{\mu _{2i}},{\nu _{2i}}) = \sum\nolimits_{i = 1}^n {\left[ {{{\left( {{\mu _{2i}} - \frac{{{h_{11}}{\mu _{1i}} + {h_{12}}{\nu _{1i}} + {h_{13}}}}{{{h_{31}}{\mu _{1i}} + {h_{32}}{\nu _{1i}} + 1}}} \right)}^2} + {{\left( {{\nu _{2i}} - \frac{{{h_{21}}{\mu _{1i}} + {h_{22}}{\nu _{1i}} + {h_{23}}}}{{{h_{31}}{\mu _{1i}} + {h_{32}}{\nu _{1i}} + 1}}} \right)}^2}} \right]}$$
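The whole Fig. 9 pipeline maps directly onto standard OpenCV primitives. The following sketch is our illustration, not the authors' code; their exact parameters may differ, and the ratio test below is applied in one direction only, whereas Eq. (10) checks both directions:

import cv2
import numpy as np

def register(img1, img2, T=0.9):
    # SIFT feature detection (Fig. 9, step 1)
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # FLANN matching with the ratio test of Eq. (10) at threshold T (step 2)
    flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5}, {"checks": 50})
    good = [m for m, n in flann.knnMatch(des1, des2, k=2)
            if m.distance < T * n.distance]

    # RANSAC homography estimation, minimizing the residual of Eq. (11) (step 3)
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H, inlier_mask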

The exact results of the RANSAC algorithm are shown in Fig. 12: the mismatches caused by the larger threshold T are eliminated, and the registration accuracy is significantly improved.

Fig. 12. The effect of RANSAC matching

We further tested the registration reliability of the complete algorithm on the same grayscale image under different exposures, scales, and rotations. Figures 13(a) and (b) show the feature point detections of the two images. Figure 13(c) shows the initial FLANN matching, which contains some mismatches, and Fig. 13(d) shows the exact RANSAC matching with the mismatches eliminated. Table 2 gives the registration statistics: the numbers of final matched pairs found by RANSAC under the different conditions stay within ±3% of their average.

Fig. 13. The matches of feature points in different exposures, scales, and rotations.

Table 2. The match result of feature points

3.3 Alignment process of two DMDs

The pixel-level alignment process is shown in Fig. 14(a). The image uploaded to DMD-2 must be flipped due to the odd number of reflections in its optical path. The original image and the flipped image were uploaded to DMD-1 and DMD-2 and separately projected into the camera. Since the resolution of the camera is 2448 × 2048, the captured projected images were pre-processed to restore the resolution to 1024 × 768. We then matched their feature points and calculated the H-matrix between the two images using the algorithm above, and the flipped image uploaded to DMD-2 was transformed by the calculated H-matrix. Once the transformed image was re-uploaded to DMD-2 and synchronously projected with the image on DMD-1, the two images overlapped on the detector of the camera. Figure 14(b) shows that the two captured images do not completely overlap before the alignment process; after pixel-level alignment, the artifacts in the overlapped image disappear and the two images visually coincide, as shown in Fig. 14(c). A slight distortion remains in the overlapped image after alignment, caused by the distortion of the projection lens itself; because the optical path is the same for each DMD, this projection-lens distortion does not in fact affect the alignment accuracy, which is a significant advantage in reducing the difficulty of pixel-level alignment of two DMDs.
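Put together, the Fig. 14(a) procedure looks roughly as follows; capture_projection(), restore_resolution() and upload_dmd2() are hypothetical stand-ins for the camera and DMD interfaces, which the paper does not specify:

import cv2

orig = cv2.imread("test_pattern.png", cv2.IMREAD_GRAYSCALE)   # 1024 x 768
flipped = cv2.flip(orig, 1)          # odd number of reflections on the DMD-2 path

# Project each image separately, capture it, and restore to 1024 x 768
pic1 = restore_resolution(capture_projection(dmd=1, image=orig))      # hypothetical
pic2 = restore_resolution(capture_projection(dmd=2, image=flipped))   # hypothetical

H, _ = register(pic2, pic1)          # map the DMD-2 picture onto the DMD-1 picture
aligned = cv2.warpPerspective(flipped, H, (1024, 768))
upload_dmd2(aligned)                 # hypothetical: the projections now overlap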

Fig. 14. Pixel-level alignment of two DMDs: (a) process, (b) before alignment, (c) after alignment.

The coordinates of 20 random matched pairs were extracted after alignment, and the coordinate errors in the X and Y directions were computed respectively. Figures 15(a) and (b) show that the average coordinate errors of the matched feature points in the two directions are 0.23 pixels and 0.21 pixels, respectively. In summary, the results prove that pixel-level alignment between the two DMDs has been well realized.

Fig. 15. Alignment results of two DMDs: (a) in X direction (b) in Y direction.

4. Experimental results

The prototype of the SP is shown in Fig. 16(a). As the axial distance between the DMD and the TIR prism group was only 1 mm, the DMD assembly was connected to the support of the TIR prism through a customized plate. All other lens elements had their own cells, which were reprocessed by an alignment turning system before being put into the lens barrels to keep the centration error below 0.015 mm, and the assembly and alignment of the lenses were conducted under the monitoring of a centering device to obtain the desired tilt and air gap tolerances. In addition, each illumination lens was mounted on its support through an 8.3° wedge to ensure that the illumination axis is perpendicular to the incident surface of the TIR prism. Ultimately, all modules were positioned and installed on an L-shaped plate, making up a highly integrated opto-mechanical system.

Fig. 16. Prototype of SP: (a) opto-mechanical system, (b) experimental setup.

4.1 Projection bit-depth

The experimental setup is shown in Fig. 16(b). A commercial current controller was chosen to drive the LEDs, and its external modulation input supported synchronization between the LEDs and the DMDs. Furthermore, the two "DLP7000 DLP 0.7 XGA 2× LVDS Type A" DMDs also needed to be synchronized: DMD-1 was set to internal synchronization mode to output the synchronization signal, and the other was set to external synchronization mode to receive it. A 12-bit industrial camera with a resolution of 2448 × 2048 was chosen for image capture, guaranteeing adequate integration time and resolution to verify the bit-depth of the projected image. Figure 17 shows the original image and the captured projection image. Because the camera position was not perfectly squared to the projector, the captured image in Fig. 17(b) tilts upward, and stray light and the camera position reduce the contrast and sharpness compared with the original image in Fig. 17(a). Nevertheless, all the subtle structural details of the original image are restored in the captured image.

Fig. 17. Image comparison of experimental results (a) original image (b) projected image

All the grayscale values of the projected image were extracted to quantitatively evaluate the projection quality. The grayscale distributions of the original image and the projected image are shown in Figs. 18(a) and 18(b). The region without information in the captured image was segmented out, and an affine transformation was performed on the segmented region to obtain a restored image with 1024 × 768 pixels. We then subtracted the restored image from the original image; the residual map of the grayscale distribution is shown in Fig. 18(c), where the depressed and convex parts represent, respectively, increases and decreases of the grayscale values in the restored image, and the corresponding color residual map is shown in Fig. 18(d). There are significant differences between the grayscale values of the two images in edge regions where the grayscale changes rapidly, caused by reducing the resolution of the captured image; the contrast and sharpness also lead to local changes in the grayscale values of the restored image compared with the original. Apart from the projection contrast, all the other issues are introduced by the camera. Overall, the grayscale differences between the two images concentrate within the range of -20 to +20, validating the projection quality of the quaternary PWM based SP.
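The quantitative comparison can be reproduced with a few OpenCV calls. The sketch below is illustrative: the file names and the corner correspondences of the segmented region are hypothetical placeholders, not values from the experiment:

import cv2
import numpy as np

captured = cv2.imread("captured_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
original = cv2.imread("original_8bit.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file

# Three corner correspondences (hypothetical values) between the segmented
# region of the captured frame and the 1024 x 768 original image frame
corner_src = np.float32([[412, 300], [1850, 315], [420, 1390]])
corner_dst = np.float32([[0, 0], [1023, 0], [0, 767]])

M = cv2.getAffineTransform(corner_src, corner_dst)
restored = cv2.warpAffine(captured, M, (1024, 768))

residual = original.astype(np.int16) - restored.astype(np.int16)
print(residual.min(), residual.max())   # mostly within -20..+20, cf. Fig. 18(c)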

Fig. 18. Grayscale distributions of two images: (a) original image, (b) projected image, (c) residual map expressed in grayscale, (d) residual map expressed in color

4.2 Projection frame rate

To further raise the frame rate, the base time was compressed to 8 µs by optimizing the quaternary PWM timing sequence of each DMD, as shown in Fig. 19(a). When the Bit-1 image completes loading, the DMD starts resetting for 4.5 µs; the Bit-1 image then displays for 8 µs, which is shorter than the 30.72 µs loading time of the Bit-2 image. Therefore, the "block clear" operation that resets the entire DMD to zero was performed after the display of the Bit-1 image to stop the display until Bit-2 finished loading. Except for the Bit-1 image, the display times of the other bit-planes are all longer than the loading time. Considering the switching time of the micromirrors and the duration of the "block clear", the total exposure time for projecting an 8-bit image is theoretically 763.94 µs, corresponding to a frame rate of about 1300 fps.

Fig. 19. Frame rate improvement by optimizing QPWM timing sequence: (a) optimized timing sequence, (b) testing result of frame rate.

A customized 8-bit image with all pixel grayscale values equal to 192 was used to test the frame rate of the projection display. According to the principle of quaternary PWM, the 8-bit image is projected sequentially by the SP prototype, from the least significant QD to the most significant QD. It is worth noting that the grayscale values of the most significant QD were all '3', while those of the other QDs were all '0'; each frame of the 8-bit image therefore generates a light signal with alternating bright and dark periods. A photodiode detector was placed at the projection exit pupil and connected to an oscilloscope through a load resistor. When the customized 8-bit image was cyclically projected and received by the photodiode detector, the corresponding periodic signal was displayed on the oscilloscope, as shown in Fig. 19(b). By interpreting the square wave with a duty cycle of 67%, we can infer that the projection frame rate was 1300 fps.

4.3 Evaluation of ultra-high frame rate projection

The prototype SP is mainly intended to validate the feasibility of quaternary PWM based ultra-high frame rate projection, so that the method can be popularized in SP design. During HWIL testing, an SP generates simulated scenes and projects them to the UUT. The UUT analyzes the received projection images and transmits feedback data to the flight motion simulator, which carries the UUT and SP and moves them together, thus simulating the attitude adjustment of the UUT after locking onto the target. In the experiment, we employed an ultra-high frame rate camera instead of a UUT to receive the projection images of the SP prototype; the camera can capture images at frame rates ranging from 50 to 100,000 fps at various resolutions. As shown in Fig. 20(a), the camera is also set to external synchronization mode to receive the vertical synchronization (VSYNC) signal from DMD-1. The synchronization process is shown in Fig. 20(b): upon each falling edge of VSYNC, the SP starts projecting the four QDs according to the quaternary PWM timing sequence, and the camera captures the four projected QDs within its integration time to generate an 8-bit frame.

Fig. 20. Synchronization between ultra-high frame rate camera and SP. (a) synchronization diagram, (b) synchronization process.

Synchronization failure may lead to inadequate superimposition of the QDs, resulting in undesirable effects such as grayscale loss in the aliased image. As shown in Fig. 21(a), the four frames captured by the ultra-high frame rate camera all exhibit grayscale loss caused by aliasing under the unsynchronized condition; conversely, Fig. 21(b) exhibits the projection image of the same frame without aliasing in the synchronized state. It is therefore important that ultra-high frame rate airborne or spaceborne UUTs support external synchronization to meet the requirements of HWIL testing. For conventional UUTs (around 200 fps) without external synchronization, projecting at a frame rate more than 5 times that of the UUT completely avoids aliasing, which also greatly expands the application scope of the SP.

Fig. 21. Captured images by ultra-high frame rate camera: (a) in unsynchronized state (b) in synchronized state.

5. Conclusion

In this paper, we proposed a novel quaternary PWM based SP, which can be used to evaluate ultra-high frame rate imaging systems in hardware-in-the-loop (HWIL) simulation testing. By illuminating two DMDs in parallel with different light intensities, a quaternary SGD that modulates QDs was built up. On the basis of this capability, the QDD based image splitting method was adopted to reduce the number of bit-planes modulated by each DMD to 4. Meanwhile, quaternary PWM was utilized to control the exposure times of the bit-planes, and the base time was compressed to 8 µs by optimizing the timing sequence. Besides, the pixel-level alignment of the two DMDs was discussed in detail. The experimental results verified the performance of the ultra-high frame rate SP, which can project 8-bit images at 1300 fps. Projection display using quaternary spatial light modulation dramatically improves the grayscale modulation efficiency compared to the traditional binary mode, increasing the frame rate by 3 times without losing brightness. Furthermore, the spatial light modulation is not constrained by the light source and can easily be applied to SP designs in various wavebands.

Funding

111 Project of China (D21009); National Natural Science Foundation of China (61903048); Jilin Scientific and Technological Development Program (20230201052GX).

Acknowledgments

The authors would like to express their gratitude to the anonymous reviewers and editors who worked tirelessly to improve our manuscript.

Disclosures

The authors declare that they have no conflict of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. Y. Pan, K. Zhang, M. Hu, et al., "FEA based opto-mechanisms design and thermal analysis of a dynamic SFS with an ultra-long exit pupil distance," Opt. Laser Technol. 161, 109148 (2023).

2. S. S. Kim, B. H. You, H. Choi, et al., "31.1: Invited Paper: World's First 240 Hz TFT-LCD Technology for Full-HD LCD-TV and Its Application to 3D Display," Dig. Tech. Pap. - Soc. Inf. Disp. Int. Symp. 40(1), 424–427 (2009).

3. G. Lazarev, S. Bonifer, P. Engel, et al., "High-resolution LCOS microdisplay with sub-kHz frame rate for high performance, high precision 3D sensor," in Digital Optical Technologies 2017 (SPIE, 2017), 10335, pp. 292–297.

4. J. R. Lippert, H. Wei, H. Yu, et al., "Record breaking high-apparent temperature capability of LCoS-based infrared scene projectors," in Technologies for Synthetic Environments: Hardware-in-the-Loop Testing XV (SPIE, 2010), 7663, pp. 266–272.

5. T. M. Cantey, G. H. Ballard, and D. A. Gregory, "Application of type II W-quantum-well diode lasers for high-dynamic-temperature-range infrared scene projection," Opt. Eng. 47(8), 086401 (2008).

6. J. LaVeigne, G. Franks, and T. Danielson, "Thermal resolution specification in infrared scene projectors," in Infrared Imaging Systems: Design, Analysis, Modeling, and Testing XXVI (SPIE, 2015), 9452, pp. 344–355.

7. Q. Shi, Y. Gao, X. Zhang, et al., "Cryogenic background infrared scene generation method based on a light-driven blackbody micro cavity array," Infrared Phys. Technol. 117, 103841 (2021).

8. Y. Gao, Z. Li, S. Zhang, et al., "Infrared scene projector (IRSP) for cryogenic environments based on a light-driven blackbody micro cavity array (BMCA)," Opt. Express 29(25), 41428–41446 (2021).

9. T. Zhao, R. Shi, Z. Li, et al., "Infrared scene projection optical system for blackbody micro cavity array," Infrared Phys. Technol. 128, 104484 (2023).

10. N. B. Hassan, F. Dehkhoda, E. Xie, et al., "Ultrahigh frame rate digital light projector using chip-scale LED-on-CMOS technology," Photon. Res. 10(10), 2434–2446 (2022).

11. S. Weech, S. Kenny, C. M. Calderon, et al., "Limits of subjective and objective vection for ultra-high frame rate visual displays," Displays 64, 101961 (2020).

12. Y. Pan, M. Hu, X. Xu, et al., "Opto-mechanical temperature adaptability analysis of a dual-band IRSP for HWIL simulation," Infrared Phys. Technol. 105, 103164 (2020).

13. D. Reddy, A. Veeraraghavan, and R. Chellappa, "P2C2: Programmable pixel compressive camera for high speed imaging," in CVPR 2011 (2011), pp. 329–336.

14. D. Kumar, S. Raut, K. Shimasaki, et al., "Projection-mapping-based object pointing using a high-frame-rate camera-projector system," Robomech J. 8(1), 8 (2021).

15. S. Kagami, "High-speed vision systems and projectors for real-time perception of the world," in 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops (2010), pp. 100–107.

16. O. Bimber, D. Iwai, G. Wetzstein, et al., "The visual computing of projector-camera systems," in ACM SIGGRAPH 2008 Classes, SIGGRAPH '08 (Association for Computing Machinery, 2008), pp. 1–25.

17. J. Takei, S. Kagami, and K. Hashimoto, "3,000-fps 3-D shape measurement using a high-speed camera-projector system," in 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems (2007), pp. 3211–3216.

18. A. B. Ayoub and D. Psaltis, "High speed, complex wavefront shaping using the digital micro-mirror device," Sci. Rep. 11(1), 18837 (2021).

19. Y. Liu, H. Gao, Q. Gu, et al., "High-Frame-Rate Structured Light 3-D Vision for Fast Moving Objects," J. Robot. Mechatron. 26(3), 311–320 (2014).

20. Y. Zhang and A. R. L. Travis, "DMD-based autostereoscopic display system for 3D interaction," Electron. Lett. 44(1), 22–24 (2008).

21. W. Oshiro, S. Kagami, and K. Hashimoto, "Perception of Motion-Adaptive Color Images Displayed by a High-Speed DMD Projector," in 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) (2019), pp. 1790–1793.

22. J.-H. R. Chang, B. V. K. V. Kumar, and A. C. Sankaranarayanan, "2¹⁶ shades of gray: high bit-depth projection using light intensity control," Opt. Express 24(24), 27937–27950 (2016).

23. J.-W. Pan and H.-H. Wang, "High contrast ratio prism design in a mini projector," Appl. Opt. 52(34), 8347–8354 (2013).

24. Y.-C. Huang and J.-W. Pan, "High contrast ratio and compact-sized prism for DLP projection system," Opt. Express 22(14), 17016–17029 (2014).

25. G. Damberg, J. Gregson, and W. Heidrich, "High Brightness HDR Projection Using Dynamic Freeform Lensing," ACM Trans. Graph. 35(3), 1–11 (2016).

26. A. Pavlovych and W. Stuerzlinger, "A high-dynamic range projection system," in Photonic Applications in Biosensing and Imaging (SPIE, 2005), 5969, pp. 636–643.

27. Y. Kusakabe, M. Kanazawa, Y. Nojiri, et al., "A high-dynamic-range and high-resolution projector with dual modulation," in Color Imaging XIV: Displaying, Processing, Hardcopy, and Applications (SPIE, 2009), 7241, pp. 167–177.

28. G. Damberg, H. Seetzen, G. Ward, et al., "3.2: High Dynamic Range Projection Systems," Dig. Tech. Pap. - Soc. Inf. Disp. Int. Symp. 38(1), 4–7 (2007).

29. F. Heide, J. Gregson, G. Wetzstein, et al., "Compressive multi-mode superresolution display," Opt. Express 22(12), 14981–14992 (2014).

30. H. Seetzen, W. Heidrich, W. Stuerzlinger, et al., "High Dynamic Range Display Systems," in Seminal Graphics Papers: Pushing the Boundaries, Volume 2, 1st ed., No. 5 (Association for Computing Machinery, 2023), pp. 39–47.

31. M. Yang, W. Xu, Z. Sun, et al., "Mid-wave infrared polarization imaging system for detecting moving scene," Opt. Lett. 45(20), 5884–5887 (2020).

32. M. Xu and H. Hua, "High dynamic range head mounted display based on dual-layer spatial modulation," Opt. Express 25(19), 23320–23333 (2017).

33. Z. Zhang, "Flexible camera calibration by viewing a plane from unknown orientations," in Proceedings of the Seventh IEEE International Conference on Computer Vision (1999), 1, pp. 666–673.

34. T. Nguyen, S. W. Chen, S. S. Shivakumar, et al., "Unsupervised Deep Homography: A Fast and Robust Homography Estimation Model," IEEE Robot. Autom. Lett. 3(3), 2346–2353 (2018).
