
Realization of undistorted volumetric multiview image with multilayered integral imaging

Open Access

Abstract

This paper presents a 3D display based on the coarse integral volumetric imaging (CIVI) technique. Although combining volumetric and multiview solutions enables expression of focal effects and specular light, the image quality of conventional systems has remained low. In this paper a high quality 3D image is attained with the CIVI technology, which compensates for distortion and discontinuity of the image based on optical calculations. In addition, a compact system design that layers color and monochrome panels is proposed.

©2011 Optical Society of America

1. Introduction

Integral imaging [1], which combines a fly-eye lens sheet with a high resolution display panel, is a prominent 3D display system in the sense that it can show not only horizontal parallax but also vertical parallax. In conventional integral imaging, the number of pixels each elemental lens covers is usually the same as the number of views, which means that the viewer perceives each elemental lens as a single pixel. Therefore the focus of the viewer's eyes is always fixed on the lens sheet, which makes it hard to show realistic images far beyond the screen or popping out of the screen.

Besides the orthodox integral imaging described above, we can also think of an integral imaging system where each elemental lens is large enough to cover dozens of times more pixels than the number of views [2–4]. Kakeya defined this type of integral imaging as coarse integral imaging (CII) [5]. The advantage of CII is that it can induce focal accommodation off the screen, for it generates a real image or a virtual image with the lenses. Thus the viewer can perceive realistic images far beyond the screen or popping out of the screen. CII, however, cannot overcome the problem of vergence-accommodation conflict because the eyes of the viewer always focus on the real image or the virtual image generated by the lenses.

One way to solve this problem is to combine a volumetric solution with a multiview solution. Conventional volumetric displays can solve the problem of vergence-accommodation conflict, but they cannot express occlusion or gloss of the objects in the scene. Multiview displays can express the latter but cannot solve the former. By combining them, both problems are expected to be solved at the same time.

The first proposal combining multiview and volumetric solutions was given by Lee et al. based on the integral imaging scheme [2]. Besides integral imaging, Cossairt et al. proposed a volumetric multiview display that rotates a screen with a limited viewing angle [6]. These technologies, however, include moving parts and cannot be realized at low cost.

To achieve a combination of volumetric and multiview displays without moving parts, Yasui et al. and Ebisu et al. proposed the use of multilayer panels that overlap with a projected multiview real image floating in the air [7, 8]. In these systems the multilayer panels are only in charge of depicting edges, while color information is given by the multiview display. Since the optical distortion of the generated real image is corrected, the presented image is free from distortion. The image quality, however, stays poor because of the limited number of volumetric layers due to the limited transmittance of the panels, which causes separation of images from different depths when the image is viewed from side angles. Also, occlusion of edges cannot be expressed by this method, which makes the presented image unnatural.

To tackle the same problem, Kim et al. proposed integral imaging systems with multilayer screens [9, 10]. In these systems multilayer translucent screens (LCD panels or PDLC screens) are used in place of a single display panel behind the fly-eye lens sheet. These systems, however, have several problems. Firstly, the images from adjacent elemental lenses are not smoothly connected because the distortion of the image due to refraction is not taken into account. Secondly, these systems cannot show objects with large depth because the connectivity of images from different depths is not considered. Therefore these systems, though multilayered, do not meet the requirements of volumetric displays for general use.

To solve the problem of connectivity between adjacent elemental images and adjacent layered panels, Kakeya proposed coarse integral volumetric imaging (CIVI), where depth-fused 3D (DFD) for curved image planes is applied to realize smooth image connection [11, 12]. In this paper the author gives a detailed analysis of DFD for curved image planes to realize CIVI with little distortion and smooth connectivity of the image. In addition, a compact system design that layers color and monochrome panels, which can attain a bright image with large depth, is proposed.

This paper is organized as follows. In Section 2 the principle and the optical features of CIVI are introduced and DFD for curved image planes is explained. In Section 3 an optical analysis of the proposed system is given based on numerical calculation of geometrical optics. In Section 4 we introduce a couple of experimental systems to evaluate the proposed method and discuss the possibility of further improvement. In Section 5 we conclude the paper.

2. Coarse integral volumetric imaging

In the simplest multilayer integral imaging, the real images (floating images) generated by the elemental lenses are assumed to be flat, as shown in Fig. 1(a). This assumption, however, is a poor approximation even when the image size and the viewing angle are strictly limited. The distortion is smallest when we use a large aperture Fresnel lens to converge the light collimated by the fly-eye lenses [13]. To collimate the light emitted from the display panel, the distance between the fly-eye lens sheet and the display panel has to equal the focal distance of the elemental lenses of the fly-eye lens sheet. Even in this configuration, however, the linear approximation is so rough that the observed image becomes discontinuous at the edges of the elemental lenses, as shown in Fig. 1(b).

Fig. 1 Simple multilayer integral imaging: (a) Design; (b) Observed image.

The actual floating image generated by the lenses is not flat, but curved and distorted because of field curvature and barrel distortion, both of which are among the well-known Seidel aberrations. Therefore we should render the elemental images so that these aberrations are compensated and the images connect smoothly at the edges of the elemental lenses.

The other point we have to take into account is the connection between adjacent panels. In traditional volumetric displays, the interval between layered panels has to be narrow to prevent the image from splitting when viewed from side angles. Therefore many panels are needed to show a deep space, which makes the image dark and obscure. When the volumetric solution is combined with integral imaging, however, the limited viewing zone of each elemental image prevents the layered images from splitting. To connect images from distant layers smoothly, we can apply the DFD approach, where each 3D pixel is expressed with two adjacent panels, each of which emits light in inverse proportion to the distance between the 3D pixel and the panel [14, 15]. By using DFD we can expect natural continuity of depth.

Since the generated layered images are not flat but curved, the DFD algorithm here should take the field curvature into account to realize natural connections between elemental images. To be concrete, each 3D pixel has to be drawn on the two adjacent curved image planes so that the intensity is in inverse proportion to the distance to each curved plane. This method should be applied to each elemental image by modifying the parameters of the curved planes so that each pixel is shown at a consistent 3D position from different viewpoints. We call this multilayered integral imaging with DFD for curved image planes coarse integral volumetric imaging (CIVI). Figure 2 shows the basic principle of CIVI. The elemental images are the images observed from different angles, and the elemental volumetric image is generated by drawing each pixel on the panel of the corresponding depth based on DFD for curved image planes. When the 3D pixel is located between the curved real image planes of panels A and B, the color component (r, g, b) of that pixel is drawn with the intensity

(R_A, G_A, B_A) = d_B (r, g, b) / (d_A + d_B)
on panel A and
(R_B, G_B, B_B) = d_A (r, g, b) / (d_A + d_B)
on panel B, where d_A is the distance between the curved real image of panel A and the 3D pixel and d_B is the distance between the curved real image of panel B and the 3D pixel.
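This intensity split can be sketched in Python (not part of the original paper; the function name and tuple-based color representation are illustrative assumptions):

```python
def dfd_split(color, d_a, d_b):
    """Split one 3D pixel between two image planes, DFD-style.

    color: (r, g, b) of the 3D pixel.
    d_a, d_b: distances from the pixel to the curved real images of
    panels A and B.  Intensity on each panel is inversely proportional
    to the distance to that panel, so the two contributions sum back
    to the original color.
    """
    total = d_a + d_b
    on_a = tuple(c * d_b / total for c in color)  # (R_A, G_A, B_A)
    on_b = tuple(c * d_a / total for c in color)  # (R_B, G_B, B_B)
    return on_a, on_b
```

For a pixel three quarters of the way from panel A to panel B, most of its intensity lands on panel B, so the fused image is perceived at the correct depth between the two curved planes.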

Fig. 2 Principle of coarse integral volumetric imaging (CIVI): (a) layering multiple color panels; (b) layering a color panel and monochrome panels.

To realize the CIVI shown in Fig. 2(a), however, color display panels with high transmittance are required, which are not commercially available at present. One way to attain multilayer integral imaging with high transmittance using current technology is to layer a color panel and multiple monochrome panels to emulate a color volumetric display, as shown in Fig. 2(b). If many color LCD panels are layered to realize a volumetric display, the image becomes extremely dark because each panel contains color filters, which reduce the brightness to less than one third. Since monochrome display panels do not include color filters, the reduction of brightness can be kept small compared with color panels.

When some of the color panels are replaced with monochrome panels, part of the image information is lost. The main purpose of the multilayer structure, however, is to solve the problem of vergence-accommodation conflict. Fortunately, our eyes' focus is much more sensitive to the contrast of an image than to its color. Therefore the monochrome volumetric structure is expected to give a sufficient depth effect to induce natural focal accommodation.

To express a volumetric image with a color panel and monochrome panels, the DFD scheme also has to be revised. First we decompose the image to be presented into a color component and a contrast component. Then the color component is depicted on the color panel, while the contrast component is depicted on the monochrome panels that correspond to the depth of each 3D pixel. This method and the standard DFD for curved image planes are compared in Fig. 3.

Fig. 3 DFD for curved image plane: (a) 3D pixel expression with multiple color panels; (b) 3D pixel expression with a color panel and multiple monochrome panels.

The details of the proposed algorithm are as follows. First, for each pixel

(R, G, B) = L (r, g, b) / M

is calculated and depicted on the color panel, where (r, g, b) is the original color, M = max(r, g, b), and L is the maximum intensity of each color component that the panel can show. The original pixel color is reproduced if the intensity of the pixel is reduced to M/L of this value. Variation of the intensity reduction for each pixel can show the contrast of the image, which induces focal accommodation of the eyes. Therefore the intensity should be reduced on the panel corresponding to the depth of each pixel. Here we apply a DFD-like approach to express depth between the discrete layers. To be concrete, we calculate

αβ = M / L,
(1 − α) : (1 − β) = d_B : d_A,

and the intensity is reduced to α times at panel A and to β times at panel B, where d_A is the distance between panel A and the 3D pixel to be presented and d_B is the distance between panel B and the 3D pixel.
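The two conditions above determine α and β. Writing α = 1 − k·d_B and β = 1 − k·d_A satisfies the ratio condition for any k, and substituting into αβ = M/L gives a quadratic in k. A Python sketch (our illustration, not code from the paper; the function name is an assumption):

```python
import math

def color_contrast_dfd(color, L, d_a, d_b):
    """Return the color-panel value and the attenuation factors
    (alpha for panel A, beta for panel B) for one 3D pixel.

    Conditions: alpha * beta = M / L and
    (1 - alpha) : (1 - beta) = d_b : d_a.
    With alpha = 1 - k*d_b and beta = 1 - k*d_a this becomes
    k^2*d_a*d_b - k*(d_a + d_b) + (1 - M/L) = 0; the smaller root
    keeps both attenuation factors in [0, 1].
    """
    M = max(color)
    rgb = tuple(L * c / M for c in color)  # depicted on the color panel
    S, P, C = d_a + d_b, d_a * d_b, 1.0 - M / L
    if P == 0.0:                  # pixel lies exactly on one panel
        k = C / S
    else:
        k = (S - math.sqrt(S * S - 4.0 * P * C)) / (2.0 * P)
    return rgb, 1.0 - k * d_b, 1.0 - k * d_a
```

The discriminant is nonnegative because (d_A + d_B)² ≥ 4·d_A·d_B and 1 − M/L ≤ 1, so a valid pair (α, β) always exists.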

3. Optical analysis

To put DFD for curved planes into practice, the geometry of the distortion has to be obtained beforehand. It is known that field curvature can be approximated as proportional to the square of the field size, which means that the field curvature has a parabolic shape. The concrete field curvature can be obtained by numerical calculation of 3D optical paths based on Snell's law of refraction. In this paper we introduce 3D light ray analysis in place of the 2D analysis in the prior work [12] to evaluate the field curvature and the barrel distortion precisely.

The refracted light ray q = (q_x, q_y, q_z) in the 3D optical simulation is given by

q_x = (p_x − s n_x) / N,
q_y = (p_y − s n_y) / N,
q_z = (p_z − s n_z) / N,
s = t + √(t² + N² − 1),
t = p_x n_x + p_y n_y + p_z n_z,
N = N_q / N_p,

where p = (p_x, p_y, p_z) is the incident light ray and n = (n_x, n_y, n_z) is the normal vector of the lens surface. N_p and N_q are the refractive indices of the material before and after refraction respectively. Note that p, q, and n should be unit vectors in the above calculation. The directions of the vectors p, q, n are illustrated in Fig. 4.
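These equations can be checked numerically. The following Python sketch is our illustration (the function name is an assumption); n is taken as a unit normal pointing toward the incident side, as in Fig. 4:

```python
import math

def refract(p, n, n_p, n_q):
    """Refract the unit incident ray p at a surface with unit normal n,
    passing from refractive index n_p into n_q, following the
    equations above; returns the unit refracted ray q."""
    N = n_q / n_p
    t = sum(pi * ni for pi, ni in zip(p, n))      # t = p . n
    s = t + math.sqrt(t * t + N * N - 1.0)
    return tuple((pi - s * ni) / N for pi, ni in zip(p, n))
```

Since the tangential component of q is that of p divided by N, Snell's law sin θ_q = (N_p/N_q) sin θ_p holds, and expanding q·q with the definition of s shows that q stays a unit vector.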

Fig. 4 Directions of vectors p, q, n.

Since we use Fresnel lenses for the elemental lenses and the large aperture lens, we can ignore the thickness of the lenses. The Fresnel lens we use has its focal point on the flat side of the lens, as shown in Fig. 5. This type of Fresnel lens has a surface whose normal vector at (x, y) is given by

n_x = x / √(x² + y² + f²),
n_y = y / √(x² + y² + f²),
n_z = √(1 + N²(x² + y²) / (x² + y² + f²)),

where (0, 0) is the center of the lens and f is the focal distance of the lens.

Fig. 5 Type of Fresnel lens used in the simulation and the experiment.

Since we depend on numerical calculation, a generalized discussion is difficult; here we therefore take up an example with specific optical parameters and explain how DFD for curved planes is attained in that case. The optical parameters in our example are as follows:

  • Size of elemental images and elemental lenses: 40 mm × 40 mm,
  • Focal distance of elemental lenses: 140 mm,
  • Focal distance of large aperture lens: 275 mm,
  • Distance between LCD panels and fly-eye lens sheet: 140 mm and 148 mm,
  • Distance between fly-eye lens sheet and large aperture lens: 275 mm.

We keep the distance between the LCD panel and the fly-eye lens sheet near the focal distance of the elemental lenses to avoid excessive distortion. Since we use Fresnel lenses made of acrylic, the refractive index of the lens material is fixed at 1.492. Chromatic aberration is not taken into account here for simplicity.

Figure 6 shows the results of the optical simulation with the above parameters. Here the position of the real image for each pixel is defined as the intersection point averaged over all pairs of rays emitted from the same pixel. As the figure shows, the curvature of the real image can be approximated as parabolic as expected, though the parabolic approximation becomes less accurate for the peripheral part of the image and for views far from the optical axis. Since the parabolic approximation holds for the central part of elemental images near the optical axis of the large aperture lens, we can use this approximation as long as the image is not large and the viewing angle of the display is not wide.
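The averaged-intersection definition can be realized as a least-squares problem: the real-image position of a pixel is the point minimizing the summed squared distance to all rays traced from that pixel. A Python sketch using NumPy (our illustration, not code from the paper):

```python
import numpy as np

def ray_bundle_focus(origins, dirs):
    """Point closest (in least squares) to a bundle of rays.

    Each ray is origin o_i plus any multiple of direction d_i.
    Minimizing sum_i |(I - d_i d_i^T)(x - o_i)|^2 over x gives the
    3x3 linear system assembled below.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, dirs):
        d = np.asarray(d, float)
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projects onto the plane normal to d
        A += P
        b += P @ np.asarray(o, float)
    return np.linalg.solve(A, b)
```

With perfectly converging rays this recovers the exact crossing point; with aberrated rays it returns the compromise point that the averaging in the text describes.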

Fig. 6 Results of optical simulations calculating distortions of elemental images whose centers are at (a) (2,0,0), (b) (6,0,0), (c) (2,4,0), (d) (6,4,0) in xyz coordinate respectively.

Here it should be noted that the parabolic real images generated by different elemental lenses do not completely overlap with one another. The parabolic plane for each elemental image is tilted toward the focal point of the large aperture lens, as shown in Fig. 6. When we use the uvw coordinate system shown in Fig. 7 in place of the xyz coordinate system, this inclination disappears, as shown in Fig. 8. Here the w axis passes through the focal point of the large aperture lens and the point where the optical axis of the elemental lens intersects the large aperture lens, while the u and v axes are orthogonal to the w axis.

Fig. 7 uvw coordinate to simplify geometry of optical distortion.

Fig. 8 3D distortions (in uvw coordinate) of elemental images whose centers are at (a) (2,0,0), (b) (6,0,0), (c) (2,4,0), (d) (6,4,0) in xyz coordinate.

In the uvw coordinate system, the extent of the parabolic distortion in the direction of the w axis can be approximated as almost constant in the central part of the image regardless of the viewpoint. Also in the uvw coordinate system, the distortion in the uv plane (barrel distortion) is negligible in the central part of the image, as shown in Fig. 9. In addition, the size of the real image generated from different layers can be assumed to be almost constant in this simulation, where the distance between the fly-eye lens and the large aperture lens is almost the same as the focal distance of the large aperture lens.

Fig. 9 2D distortions (in uv coordinate) of elemental images (lattice points) whose centers are at (a) (2,0,0), (b) (6,0,0), (c) (2,4,0), (d) (6,4,0) in xyz coordinate. Bright squares and dark diamonds show the lattice points on the front panel and the back panel respectively.

Based on the above discussion, the procedure to show a computer graphic model on the CIVI display with the above optical settings goes as follows:

  1. Based on the result of the optical simulation, the distortion of each elemental image at every depth is approximated with the equation w = a(u² + v²) + c.
  2. By orthogonal projection in the direction of the w axis for each elemental image, 3D space is transformed into a 2D image with depth buffer values.
  3. For each elemental image, we apply DFD for the distorted image plane calculated in Step 1, using the depth value of each pixel obtained in Step 2.
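Step 1 amounts to a linear least-squares fit, since w = a(u² + v²) + c is linear in the unknowns a and c. A minimal Python sketch (our illustration; the array inputs are assumed to come from the ray-tracing simulation):

```python
import numpy as np

def fit_parabolic_plane(u, v, w):
    """Fit w = a * (u**2 + v**2) + c to simulated real-image points.

    u, v, w: 1-D arrays of real-image positions in uvw coordinates.
    Returns (a, c), the curvature and offset used by the DFD step.
    """
    r2 = u ** 2 + v ** 2
    design = np.column_stack([r2, np.ones_like(r2)])  # columns: r^2, 1
    (a, c), *_ = np.linalg.lstsq(design, w, rcond=None)
    return a, c
```

One such fit is computed per elemental image and per layer, giving the curved image planes between which each 3D pixel is split.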

Here it should be noted that the extent of the parabolic distortion cannot be approximated as constant when the layers of the real image are distant from one another. Also, the distortion in the uv plane becomes larger as the viewing angle becomes wider. When the distortions of elemental images are large, the distortion in the uv plane can be corrected with a texture mapping technique [16, 17].

4. Experimental prototypes

To confirm the optical analysis discussed in the previous section, we made two prototype systems. First we made a prototype system using a half mirror to combine light from two LCD layers. The prototype has the same optical parameters as the simulation in the previous section. The size of the large aperture lens is 210 mm × 170 mm. As for the LCD panels, a pair of 8 inch SVGA panels (Century LCD-8000U) is used, which can show 12 elemental images with a resolution of 200 × 200 each.

Figure 10 shows a picture of the first prototype system, the elemental images displayed on each panel, and the images observed from two different viewpoints. Basically the front part of the object is shown on the front panel, while the back part of the object is shown on the panel behind. The pixels in between are depicted on both panels with reduced intensity based on DFD. Note that the peripheral part of the object is depicted on a nearer panel than the central part of the object at the same depth, to compensate for the parabolic distortion.

Fig. 10 Hardware, elemental images, and observed images of the first prototype (Media 1).

Though the observed images of the prototype shown in Fig. 10 include multiple elemental images, the boundaries of the elemental images are not distinct, which means that smooth connection of images is realized as expected.

Here the seams of the elemental lens array (fly-eye lens sheet) are blurred and indistinct because the large aperture lens is distant from the lens array. To make the system compact, however, it is preferable to keep this distance short, which makes the seams more distinct. To avoid this, we have to make a fly-eye lens sheet with indistinct seams. Also it should be noted that the size of the real image from each layer becomes different when the distance between the large aperture lens and the fly-eye lens sheet is not the same as the focal distance of the large lens. In this case perspective projection has to be used in place of orthogonal projection to make a 2D image with depth buffer values.

The second prototype is based on the layered structure of a color panel and two monochrome panels to attain bright images with compact system design as discussed in Section 2. The optical parameters of the second prototype system are as follows:

  • Size of elemental images and elemental lenses: hexagon with 15 mm edges,
  • Focal distance of elemental lenses: 90 mm,
  • Number of elemental images and elemental lenses: 76,
  • Size of large aperture lens: 400 mm × 300 mm,
  • Focal distance of large aperture lens: 500 mm,
  • Distance between LCD panels and fly-eye lens sheet: 88 mm, 90 mm, 92 mm,
  • Distance between fly-eye lens sheet and large aperture lens: 500 mm.

Figure 11 shows a picture of the prototype system that layers a color LCD panel and two monochrome panels. Here 18.1 inch SXGA IPS panels are used: an LC panel from a NANAO L695 for the color panel, and LC panels from TOTOKU ME181L monitors for the monochrome panels. The antiglare films of the monochrome panels are removed to make the layered image clear.

Fig. 11 Hardware, elemental images, and observed images of the second prototype (Media 2).

The system size can be kept compact despite the increased number of elemental images and layers. The quality of the realized image, however, is not as high as that of the half mirror system, because the aperture of pixels on IPS panels is limited, which results in the emergence of a moiré pattern. Further research is required to attain better image quality. One possibility for improvement is to use TN panels, which have larger apertures and higher transmittance.

As discussed for the first prototype, we can narrow the distance between the large aperture lens and the lens sheet if the elemental lenses can be connected without distinct seams. If we can connect the large aperture lens with the fly-eye lens sheet, the thickness of the multilayer display system can be as small as about 120 mm or less.

When the problems of moiré and distinct lens seams are both solved, the proposed method can be a practical solution for showing high quality 3D images combining multiview and volumetric technologies.

5. Conclusion

This paper proposes CIVI displays that attain high quality 3D images by compensating for the distortion of the image based on optical calculations. Based on the optical discussion we realized two experimental display systems. The prototype system using a half mirror to merge images from two panels at different depths attains a high quality 3D image. The second system, layering color and monochrome panels, attains a 3D image with more panels and greater depth in a compact system design, though it bears a moiré pattern because of the limited pixel aperture of LC panels. When the problems of moiré and distinct lens seams are both solved with the development of more translucent display panels and a seamless lens array, the proposed method can be a practical solution to show vivid and natural 3D images without vergence-accommodation conflict.

References and links

1. G. Lippmann, "La photographie integrale," C. R. Acad. Sci. 146, 446–451 (1908).

2. B. Lee, S. Jung, S.-W. Min, and J.-H. Park, "Three-dimensional display by use of integral photography with dynamically variable image planes," Opt. Lett. 26(19), 1481–1482 (2001).

3. J.-H. Park, S. Jung, H. Choi, and B. Lee, "Integral imaging with multiple image planes using a uniaxial crystal plate," Opt. Express 11(16), 1862–1875 (2003).

4. S.-W. Min, B. Javidi, and B. Lee, "Enhanced three-dimensional integral imaging system by use of double display devices," Appl. Opt. 42(20), 4186–4195 (2003).

5. H. Kakeya, "Coarse integral imaging and its applications," Proc. SPIE 6803, 680317 (2008).

6. O. S. Cossairt, J. Napoli, S. L. Hill, R. K. Dorval, and G. E. Favalora, "Occlusion-capable multiview volumetric three-dimensional display," Appl. Opt. 46(8), 1244–1250 (2007).

7. R. Yasui, I. Matsuda, and H. Kakeya, "Combining volumetric edge display and multiview display for expression of natural 3D images," Proc. SPIE 6055, 60550Y (2006).

8. H. Ebisu, T. Kimura, and H. Kakeya, "Realization of electronic 3D display combining multiview and volumetric solutions," Proc. SPIE 6490, 64900Y (2007).

9. Y. Kim, J.-H. Park, H. Choi, J. Kim, S.-W. Cho, and B. Lee, "Depth-enhanced three-dimensional integral imaging by use of multilayered display devices," Appl. Opt. 45(18), 4334–4343 (2006).

10. Y. Kim, H. Choi, J. Kim, S.-W. Cho, Y. Kim, G. Park, and B. Lee, "Depth-enhanced integral imaging display system with electrically variable image planes using polymer-dispersed liquid-crystal layers," Appl. Opt. 46(18), 3766–3773 (2007).

11. H. Kakeya, "MOEVision: simple multiview display with clear floating image," Proc. SPIE 6490, 64900J (2007).

12. H. Kakeya, "Improving image quality of coarse integral volumetric display," Proc. SPIE 7237, 723726 (2009).

13. H. Kakeya, T. Kurokawa, and Y. Mano, "Electronic realization of coarse integral volumetric imaging with wide viewing angle," Proc. SPIE 7524, 752411 (2010).

14. S. Suyama, H. Takada, K. Uehira, S. Sakai, and S. Ohtsuka, "A novel direct-vision 3-D display using luminance-modulated two 2-D images displayed at different depths," in SID Symposium Digest of Technical Papers, vol. 31 (2000), pp. 1208–1211.

15. S. Suyama, H. Takada, and S. Ohtsuka, "A direct-vision 3-D display using a new depth-fusing perceptual phenomenon in 2-D displays with different depths," IEICE Trans. Electron. E85-C, 1911–1915 (2002).

16. H. Kakeya and Y. Arakawa, "Autostereoscopic display with real-image virtual screen and light filters," Proc. SPIE 4660, 349–357 (2002).

17. H. Kakeya, "Real-image-based autostereoscopic display using LCD, mirrors, and lenses," Proc. SPIE 5006, 98–108 (2003).

Supplementary Material (2)

Media 1: MPG (1262 KB)     
Media 2: MPG (1344 KB)     
