Optica Publishing Group

Hybrid holographic Maxwellian near-eye display based on spherical wave and plane wave reconstruction for augmented reality display

Open Access

Abstract

The holographic Maxwellian display is a promising technique for augmented reality (AR) because it solves the vergence-accommodation conflict while presenting a high-resolution image. However, the conventional holographic Maxwellian display has an inherent trade-off between depth of field (DOF) and image quality. In this paper, two types of holographic Maxwellian display, the spherical wave type and the plane wave type, are proposed and analyzed. The spherical wavefront and the plane wavefront are produced by a spatial light modulator (SLM) for Maxwellian display. Owing to the focusing properties of the different wavefronts, the two types of display have complementary DOF ranges. A hybrid approach combining the spherical wavefront and the plane wavefront is proposed to achieve a large DOF with high image quality. An optical experiment with AR display is demonstrated to verify the proposed method.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Maxwellian displays, also known as retinal projection displays (RPDs), provide a relatively sharp image on the retina regardless of the accommodation response of the human eye [1]. They are a promising technique for augmented reality (AR) applications [2,3] without visual fatigue because they avoid the vergence-accommodation conflict (VAC). Conventional Maxwellian displays are based on geometrical optics and lens-based light-ray control, so performance parameters such as the pupil location, the depth of field (DOF) and the image quality are fixed and inflexible. Although time-multiplexing techniques such as an LED array [4], dynamic tilted mirrors [5] and Pancharatnam-Berry deflectors [6] can translate the pupil location to expand the eyebox, they complicate the overall system. Furthermore, lens imaging makes the system bulky, and lens aberrations degrade the image quality. Using holographic optical elements (HOEs) instead of lenses is advantageous for a compact system and eyebox expansion [7–13]. However, HOE recording is complex, and HOEs cannot be modified once fabricated. Recently, a holographic Maxwellian display based on wave optics was proposed with flexible control of the beam convergence point and beam width, which is useful for eyebox expansion and DOF extension [14,15]. It is implemented with a spatial light modulator (SLM) that produces the expected wavefront, making the system compact and free of lens aberration. Although the DOF can be extended using a wavefront filtering method, the image quality is degraded as a result. Thus, despite its many advantages, the conventional holographic Maxwellian display has an inherent trade-off between DOF and image quality.

In this paper, two types of holographic Maxwellian display, the spherical wave type and the plane wave type, are proposed. Through different hologram calculations, a spherical wavefront and a plane wavefront are produced for Maxwellian display, respectively. The trade-off between DOF and image quality is analyzed in detail. Owing to the focusing properties of the different wavefronts, the two methods are suitable for different depth ranges. Taking advantage of hologram superposition, a hybrid approach combining the spherical and plane wavefronts is proposed for an extended DOF with high image quality. The virtual content is divided into two parts according to the expected display depth: for small depths, the corresponding hologram is calculated by the spherical wave-based method; for large depths, by the plane wave-based method. The two holograms are then added together to form the final hologram, free of the trade-off between DOF and image quality.

2. Spherical wave-based holographic Maxwellian display

Figure 1(a) shows the principle of the spherical wave-based holographic Maxwellian display, which is the same as in previous work [14,15]. The complex amplitude distribution U(x,y) on the image plane is obtained by multiplying the target image intensity I(x,y) with a convergent spherical wave phase:

$$U(x,y) = I(x,y) \cdot \exp \left[ {\frac{{ - jk({x^2} + {y^2})}}{{2({z_1} + {z_2})}}} \right],$$
where k = 2π/λ is the wave number, z1 is the distance from the target image to the SLM, and z2 is the distance from the SLM to the pupil plane. Here, the added spherical wave is focused at the pupil plane. Then, the complex amplitude distribution H(x1,y1) on the SLM plane is calculated through a Fresnel diffraction:
$$H({x_1},{y_1}) = \int\!\!\!\int {U(x,y) \cdot \exp \left[ {\frac{{jk[{{(x - {x_1})}^2} + {{(y - {y_1})}^2}]}}{{2{z_1}}}} \right]} dxdy.$$
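The two-step calculation of Eqs. (1)-(2) can be sketched numerically. The sketch below uses a single-FFT Fresnel form with constant factors dropped; the grid handling and the function name are illustrative choices, not the authors' implementation.

```python
import numpy as np

def spherical_wave_hologram(img, dx, z1, z2, wavelength=532e-9):
    """Eqs. (1)-(2) as a single-FFT Fresnel step: multiply the target image
    by a convergent spherical phase focused on the pupil plane, then
    propagate the distance z1 to the SLM plane. Constant factors dropped."""
    n = img.shape[0]
    k = 2 * np.pi / wavelength
    x = (np.arange(n) - n / 2) * dx
    X, Y = np.meshgrid(x, x)
    # Eq. (1): convergent spherical phase focused at z1 + z2
    U = img * np.exp(-1j * k * (X**2 + Y**2) / (2 * (z1 + z2)))
    # Eq. (2): single-FFT Fresnel diffraction over z1
    inner = np.exp(1j * k * (X**2 + Y**2) / (2 * z1))
    dx1 = wavelength * z1 / (n * dx)        # SLM-plane pitch implied by the FFT
    x1 = (np.arange(n) - n / 2) * dx1
    X1, Y1 = np.meshgrid(x1, x1)
    outer = np.exp(1j * k * (X1**2 + Y1**2) / (2 * z1))
    return outer * np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(U * inner)))
```

The returned complex field would then be encoded onto the SLM (e.g., via a carrier wave for an amplitude-type device, as in the experiment of Section 2).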

Fig. 1. (a) Principle of spherical wave-based holographic Maxwellian display. (b) The case of one-point image.

To better understand the image formation of spherical wave-based holographic Maxwellian display, the wavefront distribution V(x2,y2) on the pupil plane is further calculated as:

$$V({x_2},{y_2}) = \int\!\!\!\int {U(x,y) \cdot \exp \left[ {\frac{{jk[{{(x - {x_2})}^2} + {{(y - {y_2})}^2}]}}{{2({z_1} + {z_2})}}} \right]} dxdy.$$
Substituting Eq. (1) into Eq. (3) and rewriting it in fast-Fourier-transform (FFT) form:
$$\begin{array}{l} V({x_2},{y_2}) = \exp \left[ {\frac{{jk({x_2}^2 + {y_2}^2)}}{{2({z_1} + {z_2})}}} \right] \cdot A\left[ {\frac{{{x_2}}}{{\lambda ({z_1} + {z_2})}},\frac{{{y_2}}}{{\lambda ({z_1} + {z_2})}}} \right],\textrm{ }\\ A({f_x},{f_y}) = FFT[{I(x,y)} ], \end{array}$$
where FFT[·] is the fast Fourier transform and A(fx, fy) is the frequency spectrum of I(x, y). Equation (4) shows that the wavefront on the pupil plane is the product of a divergent spherical wavefront and the frequency spectrum of the target image. Consider a one-point image, as in Fig. 1(b). The point image can be represented by a pulse function $\delta (x - \xi ,y - \eta )$. Substituting it into Eq. (3), the wavefront of the point image on the pupil plane is:
$$V({x_2},{y_2}) = I(\xi ,\eta )\exp \left[ {\frac{{jk({x_2}^2 + {y_2}^2)}}{{2({z_1} + {z_2})}}} \right] \cdot \exp \left[ {\frac{{jk( - \xi {x_2} - \eta {y_2})}}{{({z_1} + {z_2})}}} \right].$$
Equation (5) indicates that the wavefront of the point image is the combination of a divergent spherical wave and an inclined plane wave. The direction cosines of the plane wave, $(\frac{{ - \xi }}{{{z_1} + {z_2}}},\frac{{ - \eta }}{{{z_1} + {z_2}}})$, depend on the point coordinates. Figure 1(b) shows that the product of the two phase distributions is an off-center spherical wave phase. Thus, each point of the target image produces a specific spherical wave toward the pupil plane, and these waves superimpose to form a spot; hence the name spherical wave-based holographic Maxwellian display. The maximum spot width d is given by the size of the effective Fresnel diffraction field:
$$d = \frac{{\lambda ({z_1} + {z_2})}}{{\Delta x}},$$
where Δx is the sampling pitch of the target image. The light rays are focused only at the target image plane and are out of focus at other depth planes, so the DOF is limited. Note that not all image points share the same spot size, because the image contains both low- and high-frequency components. The spread angle of an image point is proportional to $\sin^{-1}(\lambda\nu)$, where λ and ν are the wavelength and the spatial frequency [16]. Furthermore, since a convergent spherical wave phase is multiplied with the target image, it is equivalent to spherical illumination, or inclined incidence. Under inclined incidence, the resultant spot size is $2\lambda \nu ({z_1} + {z_2})/\cos \theta$, where θ is the inclination angle.

According to Fig. 2(a), the maximum image spot size p of one pixel of the target image on the retina is given as:

$$p = \left( {\Delta x\frac{l}{{{z_1} + {z_2}}} + \frac{{|{{z_1} + {z_2} - l} |}}{{{z_1} + {z_2}}}d} \right)\frac{{{f_{eye}}}}{l},$$
where l is the focus depth of the eye and feye is the focal length of the eye. Here the spot width d is assumed to be smaller than the pupil width. The black solid line in Fig. 2(b) shows how p changes with focus depth. The parameters are: Δx = 0.1 mm, z1 = 300 mm, z2 = 150 mm, λ = 532 nm, feye = 18 mm. The sampling pitch is selected so that the spot width d is smaller than a typical eye pupil diameter. The DOF is clearly limited to a range around the target image plane. Equation (7) indicates that aperture filtering can be applied at the pupil plane to reduce the spot width d and thereby enhance the DOF. That is, V(x2,y2) can be multiplied by a circular aperture function rect($\sqrt {x_2^2 + y_2^2} /r)$, where r is the radius of the circle and rect[·] is the rectangular function. The filtered wavefront is then back propagated to the SLM plane to form the hologram. Figure 2(b) shows the results of aperture filtering with 0.5 mm and 0.2 mm apertures. After filtering, the DOF is greatly enhanced. However, aperture filtering preserves only the low-frequency components and discards the high-frequency components of the target image, which degrades the image. This is precisely the inherent trade-off between DOF and image quality in the conventional holographic Maxwellian display.
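Eqs. (6)-(7) are straightforward to evaluate numerically. The short script below samples the black curve of Fig. 2(b) at a few depths and shows the effect of clipping d by an aperture (all values in metres; the variable names are ours):

```python
# Parameters of Fig. 2(b): dx = 0.1 mm, z1 = 300 mm, z2 = 150 mm,
# lam = 532 nm, f_eye = 18 mm (metres throughout).
dx, z1, z2 = 0.1e-3, 0.3, 0.15
lam, f_eye = 532e-9, 18e-3
z = z1 + z2

d = lam * z / dx                          # Eq. (6): max spot width, ~2.39 mm
p = lambda l, d: (dx * l / z + abs(z - l) / z * d) * f_eye / l   # Eq. (7)

print(f"d = {d * 1e3:.2f} mm")
for l in (0.45, 1.0, 2.0):
    # unfiltered vs. aperture-filtered (d clipped to 0.2 mm) retinal spot size
    print(f"l = {l:.2f} m: p = {p(l, d) * 1e6:.1f} um, "
          f"filtered p = {p(l, min(d, 0.2e-3)) * 1e6:.1f} um")
```

The unfiltered spot is smallest at the target image plane (l = z1 + z2) and grows quickly away from it, while clipping d flattens the curve at the cost of discarding high frequencies.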

Fig. 2. (a) The image spot size p of one pixel of the target image on the retina and (b) the change of p at different focus depths in spherical wave-based method.

To better illustrate the trade-off, a numerical simulation of the target image is performed with Δx = 16 μm, N = 1200, z1 = 300 mm, z2 = 150 mm and λ = 532 nm, where N is the sampling number of the target image. Figure 3(a) shows the numerical reconstruction results at different reconstruction depths and filtering aperture sizes. The peak signal-to-noise ratio (PSNR) is used to assess the image quality [17]. PSNR measures the quality of a distorted image relative to the original and is most easily defined via the mean squared error (MSE). Given an m×n original image I and a distorted image K, $MSE = \frac{1}{{mn}}\sum\limits_{i = 0}^{m - 1} {\sum\limits_{j = 0}^{n - 1} {[I(i,j) - K(i,j)} } {]^2}.$ The PSNR (in dB) is $PSNR = 10{\log _{10}}(\frac{{MAX_I^2}}{{MSE}}),$ where $MA{X_I}$ is the maximum possible pixel value of the image, here 255. The lower the PSNR, the farther the distorted image is from the original, i.e., the worse its quality. At the target image plane, the reconstructed image has the best quality (the highest PSNR). As the reconstruction plane moves away from the target image plane, the image quality worsens, reflecting the limited DOF. As the filtering aperture size decreases, the DOF improves while the image quality around the target image plane degrades. This trade-off is shown more clearly in Fig. 3(b).
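The PSNR metric described above takes only a few lines to implement (a standard definition, not code from the paper):

```python
import numpy as np

def psnr(original, distorted, max_i=255.0):
    """Peak signal-to-noise ratio in dB between an original image I and a
    distorted image K, following the MSE-based definition in the text."""
    err = original.astype(np.float64) - distorted.astype(np.float64)
    mse = np.mean(err ** 2)
    if mse == 0:
        return np.inf                # identical images: infinite PSNR
    return 10.0 * np.log10(max_i ** 2 / mse)
```

A lower PSNR means the reconstruction is farther from the target image; this is how Figs. 3 and 7 quantify the DOF/quality trade-off.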

Fig. 3. (a) The numerical reconstruction results of spherical wave-based method at different reconstruction depths and different filtering aperture sizes. (b) The trade-off between DOF and image quality in spherical wave-based method.

An optical experiment with AR display is demonstrated in Fig. 4. Four cartoon models, “rabbit 1”, “rabbit 2”, “Mario” and “mushroom”, were located at 0.45 m, 0.6 m, 1.5 m and 2.3 m, respectively. An amplitude-type SLM (8 μm pixel pitch, 1920 × 1200 resolution) was used to load the hologram with 1200 × 1200 resolution. The filtering aperture size in the hologram calculation was set to 2.4 mm. An inclined carrier wave was used to interfere with the wavefront distribution on the SLM plane to separate the signal term from the DC term and the conjugate term, and an aperture placed at the pupil plane blocked the DC and conjugate terms. Figure 4(b) shows the pictures captured at different focusing depths. The virtual image is degraded when focusing on far objects.

Fig. 4. (a). Experimental setup. (b). The captured pictures at different focusing depths in spherical wave-based method.

3. Proposed plane wave-based holographic Maxwellian display

Figure 5(a) shows the principle of the proposed plane wave-based holographic Maxwellian display. First, the complex amplitude distribution V(x2,y2) on the pupil plane is calculated by Fourier transforming the target image:

$$V({x_2},{y_2}) = \int\!\!\!\int {I(x,y)\exp \left[ {\frac{{ - j2\pi (x{x_2} + y{y_2})}}{{\lambda ({z_1} + {z_2})}}} \right]} dxdy.$$
Rewriting it in FFT form:
$$V({x_2},{y_2}) = A\left[ {\frac{{{x_2}}}{{\lambda ({z_1} + {z_2})}},\frac{{{y_2}}}{{\lambda ({z_1} + {z_2})}}} \right],\textrm{ }A({f_x},{f_y}) = FFT[{I(x,y)} ].$$

Fig. 5. (a) Principle of plane wave-based holographic Maxwellian display. (b) The case of one-point image.

After aperture filtering, V(x2,y2) is back propagated to the SLM plane to form the hologram:

$$H({x_1},{y_1}) = \int\!\!\!\int {V({x_2},{y_2}) \cdot \exp \left[ {\frac{{jk[{{({x_2} - {x_1})}^2} + {{({y_2} - {y_1})}^2}]}}{{ - 2{z_2}}}} \right]} d{x_2}d{y_2}.$$
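The three-step pipeline of Eqs. (8)-(10) can be sketched as follows. A paraxial angular-spectrum kernel stands in for the Fresnel integral of Eq. (10), an equivalent choice under the paraxial approximation but not necessarily the authors' implementation, and the function name is ours:

```python
import numpy as np

def plane_wave_hologram(img, dx, z1, z2, aperture_radius, wavelength=532e-9):
    """Eqs. (8)-(10) sketched: Fourier-transform the target image to the
    pupil plane, apply circular aperture filtering, then back-propagate
    the distance z2 to the SLM plane."""
    n = img.shape[0]
    # Eqs. (8)-(9): the pupil-plane field is the image spectrum
    V = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(img)))
    # pupil-plane sampling pitch implied by the Fourier relation
    dx2 = wavelength * (z1 + z2) / (n * dx)
    x2 = (np.arange(n) - n / 2) * dx2
    X2, Y2 = np.meshgrid(x2, x2)
    V = V * (np.sqrt(X2**2 + Y2**2) <= aperture_radius)   # aperture filtering
    # Eq. (10): back-propagation by -z2 (paraxial angular spectrum)
    fx = np.fft.fftfreq(n, dx2)
    FX, FY = np.meshgrid(fx, fx)
    kernel = np.exp(1j * np.pi * wavelength * z2 * (FX**2 + FY**2))
    return np.fft.fftshift(np.fft.ifft2(np.fft.fft2(np.fft.ifftshift(V)) * kernel))
```

Shrinking `aperture_radius` discards high-frequency components of the spectrum, which is exactly the DOF-versus-quality lever discussed below.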
To clarify the physical meaning, a one-point image is again considered in Fig. 5(b). The point image can be represented by a pulse function $\delta (x - \xi ,y - \eta )$. Substituting it into Eq. (8), the wavefront of the point image on the pupil plane is:
$$V({x_2},{y_2}) = I(\xi ,\eta ) \cdot \exp \left[ {\frac{{jk( - \xi {x_2} - \eta {y_2})}}{{({z_1} + {z_2})}}} \right].$$
Equation (11) indicates that the wavefront of the point image is an inclined plane wave. Thus, each point produces a specific plane wave toward the pupil plane, and these waves superimpose to form a spot; hence the name plane wave-based holographic Maxwellian display. The maximum spot width d is again given by Eq. (6). Unlike in the spherical wave-based method, each light ray maintains its width if the limited-aperture diffraction is ignored, as shown in Fig. 5(b). Note that the light-ray width is determined by the spatial frequency of the object point and can be written as $2\lambda \nu ({z_1} + {z_2})$, where ν is the spatial frequency.

According to Fig. 6(a), the maximum image spot size p of one pixel of the target image on the retina is given as:

$$p = \left( {\Delta x\frac{l}{{{z_1} + {z_2}}} + d} \right)\frac{{{f_{eye}}}}{l}.$$
The black solid line in Fig. 6(b) shows how p changes with focus depth; the parameters are the same as in Fig. 2(b). The image spot size decreases as the focus depth increases, so the plane wave-based holographic Maxwellian display provides a good-quality image when the eye is focused at far distances. Figure 6(b) also shows that aperture filtering can improve the DOF, but at the price of image quality due to the loss of high-frequency components.

Fig. 6. (a) The image spot size p of one pixel of the target image on the retina and (b) the change of p at different focus depths in plane wave-based method.

Numerical simulation of the target image is also performed for the plane wave-based method, with the same parameters as in Fig. 3(a). Figure 7(a) shows the numerical reconstruction results at different reconstruction depths and filtering aperture sizes. The image quality is poor near the target image plane and improves as the reconstruction plane moves farther away. As the filtering aperture size decreases, the DOF improves while the image quality degrades. This trade-off is shown more clearly in Fig. 7(b).

Fig. 7. (a) The numerical reconstruction results of plane wave-based method at different reconstruction depths and different filtering aperture sizes. (b) The trade-off between DOF and image quality in plane wave-based method.

An optical experiment with AR display is demonstrated in Fig. 8. The filtering aperture size in the hologram calculation was again set to 2.4 mm. Figure 8 shows the pictures captured at different focusing depths. The virtual image is degraded when focusing on near objects but clear when focusing on far objects.

Fig. 8. The captured pictures at different focusing depths in plane wave-based method.

4. Proposed hybrid holographic Maxwellian display

The analysis above shows that the spherical wave-based method and the plane wave-based method are suitable for near vision and far vision, respectively. Interestingly, when l = 2(z1+z2), the image spot sizes in Eq. (7) and Eq. (12) are equal. When the focus depth l is smaller than 2(z1+z2), the spherical wave-based method gives better image quality; when l is larger than 2(z1+z2), the plane wave-based method does. The two types of display therefore have complementary DOF ranges.
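The crossover at l = 2(z1+z2) follows directly from comparing Eq. (7) and Eq. (12), which differ only in the factor |z1+z2−l|/(z1+z2) multiplying d. A quick numerical check with the Fig. 2(b) parameters (variable names ours):

```python
dx, z1, z2 = 0.1e-3, 0.3, 0.15
lam, f_eye = 532e-9, 18e-3
z = z1 + z2
d = lam * z / dx                                              # Eq. (6)

p_sph = lambda l: (dx * l / z + abs(z - l) / z * d) * f_eye / l   # Eq. (7)
p_pln = lambda l: (dx * l / z + d) * f_eye / l                    # Eq. (12)

for l in (0.5, 2 * z, 2.0):
    if p_sph(l) < p_pln(l):
        better = "spherical"
    elif p_pln(l) < p_sph(l):
        better = "plane"
    else:
        better = "equal"
    print(f"l = {l:.2f} m: p_sph = {p_sph(l) * 1e6:.1f} um, "
          f"p_pln = {p_pln(l) * 1e6:.1f} um -> {better}")
```

The spherical-wave spot is smaller below the crossover depth and the plane-wave spot smaller above it, confirming the complementary DOF ranges.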

In AR display, virtual content is displayed to “augment” the corresponding real objects, and different virtual contents are associated with real objects at different depths. Thus, the spherical wave-based method is more suitable for indoor AR applications, where object depths are small, while the plane wave-based method is more suitable for outdoor AR applications, where object depths are large. A real scene, however, usually contains both near and far objects, so the two methods should be combined to match the virtual content with them. Figure 9 shows the principle of the proposed hybrid approach combining spherical wave and plane wave reconstruction. The target image is divided into two parts according to the expected display depth. For small depths (l < 2(z1+z2)), the corresponding hologram is calculated using Eqs. (1)-(2); note that, to apply aperture filtering, the wavefront in Eq. (1) is first propagated to the pupil plane and then back propagated to the SLM plane after filtering. For large depths (l > 2(z1+z2)), the corresponding hologram is calculated using Eqs. (8)-(10). The two holograms are then added together to form the final hologram with extended DOF.
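Under this scheme, the hybrid hologram is simply the complex sum of the two per-depth holograms. The self-contained sketch below combines a single-FFT Fresnel step for the near content with a spectrum back-propagation for the far content; aperture filtering, normalisation and the sampling-pitch bookkeeping between the two routes are omitted, and all function names are ours:

```python
import numpy as np

def _quad_phase(n, pitch, wavelength, z):
    # exp[jk(x^2 + y^2)/(2z)] on an n x n grid with the given sampling pitch
    x = (np.arange(n) - n / 2) * pitch
    X, Y = np.meshgrid(x, x)
    return np.exp(1j * np.pi * (X**2 + Y**2) / (wavelength * z))

def _cfft(u):
    # centred 2-D FFT
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(u)))

def hybrid_hologram(near_img, far_img, dx, z1, z2, wavelength=532e-9):
    """Hybrid hologram: near content via the spherical-wave route
    (Eqs. (1)-(2)), far content via the plane-wave route (Eqs. (8)-(10)),
    then complex superposition of the two holograms."""
    n = near_img.shape[0]
    # Near content: convergent spherical phase focused on the pupil (Eq. (1)),
    # then a single-FFT Fresnel step over z1 to the SLM plane (Eq. (2)).
    U = near_img * _quad_phase(n, dx, wavelength, -(z1 + z2))
    H_near = _cfft(U * _quad_phase(n, dx, wavelength, z1))
    # Far content: image spectrum on the pupil plane (Eqs. (8)-(9)),
    # back-propagated by -z2 to the SLM plane (Eq. (10)).
    V = _cfft(far_img)
    dx2 = wavelength * (z1 + z2) / (n * dx)   # pupil-plane pitch from the FFT
    H_far = _cfft(V * _quad_phase(n, dx2, wavelength, -z2))
    # The holograms add linearly: one SLM pattern reconstructs both wavefronts.
    return H_near + H_far
```

Because hologram superposition is linear, the single summed pattern reconstructs both wavefronts simultaneously, which is what lets the hybrid display serve near and far depths at once.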

Fig. 9. Principle of proposed hybrid holographic Maxwellian display.

An optical experiment with AR display was performed to verify the proposed hybrid method. Two words, “RABBIT” and “MARIO”, were designed to match near and far objects, respectively. The near objects were two rabbit models located at 0.45 m and 0.6 m; the far objects were the Mario and mushroom models located at 1.5 m and 2.3 m. Figure 10 shows the photos captured for the spherical wave-based method, the plane wave-based method and the proposed hybrid method. In the spherical wave-based method, both words were clear when the camera was focused on the two near rabbit models but blurred when focused on the two far models; the plane wave-based method behaved in just the opposite way. Both methods thus show limited DOF. In the proposed hybrid method, the word “RABBIT” was clear when focused on the two near rabbit models, and the word “MARIO” was clear when focused on the two far models; each word was sharp at its expected focus depth, matching the corresponding objects. Thus, the DOF is greatly extended in the proposed hybrid method without image quality degradation. The small field of view can be improved using a tunable liquid lens [18].

Fig. 10. The captured pictures at different focusing depths in spherical wave-based method, plane wave-based method and proposed hybrid method.

5. Conclusion

In conclusion, the proposed hybrid approach resolves the trade-off between DOF and image quality in the conventional holographic Maxwellian display. The performances of the two types of holographic Maxwellian display were studied first, showing good performance over complementary DOF ranges. By jointly using spherical wave and plane wave reconstruction, different virtual contents are presented sharply within different depth ranges to match the corresponding real objects. An optical experiment with AR display verifies that the DOF is extended in the proposed hybrid method without image quality degradation. The proposed method is promising for near-eye AR displays with large DOF and high image quality.

Funding

National Natural Science Foundation of China (61805065).

Disclosures

The authors declare no conflicts of interest.

References

1. G. Westheimer, “The Maxwellian view,” Vision Res. 6(11-12), 669–682 (1966). [CrossRef]  

2. M. Inami, N. Kawakami, T. Maeda, Y. Yanagida, and S. Tachi, “A stereoscopic display with large field of view using Maxwellian optics,” in Proceedings of Int. Conf. Artificial Reality and Tele-Existence 97, 71–76 (1997).

3. Z. He, X. Sui, G. Jin, and L. Cao, “Progress in virtual reality and augmented reality based on holographic display,” Appl. Opt. 58(5), A74–A81 (2019). [CrossRef]  

4. C. Jang, K. Bang, S. Moon, J. Kim, S. Lee, and B. Lee, “Retinal 3D: augmented reality near-eye display via pupil-tracked light field projection on retina,” ACM Trans. Graph. 36(6), 1–13 (2017). [CrossRef]  

5. M. K. Hedili, B. Soner, E. Ulusoy, and H. Urey, “Light-efficient augmented reality display with steerable eyebox,” Opt. Express 27(9), 12572–12581 (2019). [CrossRef]

6. T. Lin, T. Zhan, J. Zou, F. Fan, and S.-T. Wu, “Maxwellian near-eye display with an expanded eyebox,” Opt. Express 28(26), 38616–38625 (2020). [CrossRef]  

7. S. B. Kim and J. H. Park, “Optical see-through Maxwellian near-to-eye display with an enlarged eyebox,” Opt. Lett. 43(4), 767–770 (2018). [CrossRef]  

8. J. H. Park and S. B. Kim, “Optical see-through holographic near-eye-display with eyebox steering and depth of field control,” Opt. Express 26(21), 27076–27088 (2018). [CrossRef]

9. C. Jang, K. Bang, G. Li, and B. Lee, “Holographic near-eye display with expanded eye-box,” ACM Trans. Graph. 37(6), 1–14 (2019). [CrossRef]  

10. C. Yoo, M. Chae, S. Moon, and B. Lee, “Retinal projection type lightguide-based near-eye display with switchable viewpoints,” Opt. Express 28(3), 3116–3135 (2020). [CrossRef]  

11. M. H. Choi, Y. G. Ju, and J. H. Park, “Holographic near-eye display with continuously expanded eyebox using two-dimensional replication and angular spectrum wrapping,” Opt. Express 28(1), 533–547 (2020). [CrossRef]  

12. L. Mi, C. P. Chen, Y. Lu, W. Zhang, J. Chen, and N. Maitlo, “Design of lensless retinal scanning display with diffractive optical element,” Opt. Express 27(15), 20493–20507 (2019). [CrossRef]  

13. J. S. Lee, Y. K. Kim, and Y. H. Won, “See-through display combined with holographic display and Maxwellian display using switchable holographic optical element based on liquid lens,” Opt. Express 26(15), 19341–19355 (2018). [CrossRef]  

14. Y. Takaki and N. Fujimoto, “Flexible retinal image formation by holographic Maxwellian-view display,” Opt. Express 26(18), 22985–22999 (2018). [CrossRef]  

15. C. Chang, W. Cui, J. Park, and L. Gao, “Computational holographic Maxwellian near-eye display with an expanded eyebox,” Sci. Rep. 9(1), 1–9 (2019). [CrossRef]  

16. T. Shimobaba and T. Ito, “Random phase-free computer-generated hologram,” Opt. Express 23(7), 9549–9554 (2015). [CrossRef]  

17. Q. Huynh-Thu and M. Ghanbari, “Scope of validity of PSNR in image/video quality assessment,” Electron. Lett. 44(13), 800–801 (2008). [CrossRef]  

18. D. Wang, C. Liu, C. Shen, Y. Xing, and Q. H. Wang, “Holographic capture and projection system of real object based on tunable zoom lens,” PhotoniX 1(1), 6–15 (2020). [CrossRef]  
