
Improved resolution 3D object reconstruction using computational integral imaging with time multiplexing

Open Access

Abstract

In the computational three-dimensional (3D) volumetric reconstruction integral imaging (II) system, volume pixels of the scene are reconstructed by superimposing the inversely mapped elemental images through a computationally simulated optical reconstruction process based on ray optics. When a 3D object is placed close to the lenslet array in the pickup process, the intensity of adjacent pixels of the reconstructed image may vary significantly, degrading the image quality. These intensity differences result from the different numbers of superimposed elemental images used to reconstruct the corresponding pixels. In this paper, we propose to improve the reconstructed image quality in two ways: 1) normalized computational 3D volumetric reconstruction II, and 2) a hybrid moving array lenslet technique (MALT). To reduce the intensity irregularities between pixels, we normalize the intensities of the reconstructed image pixels by the number of overlapping inversely mapped elemental images. To capture the elemental image sets for the MALT process, the pickup of a stationary 3D object is performed repeatedly with the pickup lenslet array at various positions in a plane perpendicular to the optical axis. With MALT, we enhance the quality of the reconstructed images by increasing the sampling rate. We present experimental results of volume pixel reconstruction to test and verify the performance of the proposed reconstruction algorithm, and show that it yields substantial improvement in the visual quality of the 3D reconstruction.

©2004 Optical Society of America

1. Introduction

The search for optimum three-dimensional (3D) imaging and visualization techniques has been the subject of research for many years [1–23]. However, only recently has technology approached the level required for practical realization of 3D imaging systems. Among 3D imaging techniques, the integral imaging (II) system [4, 10–11, 14–23] is a promising technology that uses a microlens array or a pinhole array to capture light rays emanating from 3D objects (see Fig. 1). The light rays that pass through each pickup microlens or pinhole are captured on a two-dimensional (2D) image sensor and recorded. The captured 2D image arrays are referred to as elemental images. The elemental images are flipped, demagnified 2D images, each with its own perspective of the 3D object. To reconstruct the 3D scene from the 2D elemental images, the rays coming from the elemental images are reversely propagated through a display microlens array similar to the pickup microlens array. With this process, the original 3D scene is displayed where it was originally located.

To reconstruct the 3D scene using conventional optical II, elemental images are displayed on a 2D display panel, such as a liquid crystal display (LCD) or a spatial light modulator (SLM), together with a display microlens array [17, 21]. To overcome the image quality degradation introduced by the optical devices used in optical II reconstruction, and to allow an arbitrary viewing angle within the total viewing angle, computational II reconstruction techniques [14–15, 18, 23] have been proposed. However, it is not possible to obtain full 3D volume information with a computational reconstruction algorithm that reconstructs a single specific viewpoint of the 3D object through a microlens array [14]. A computational reconstruction algorithm that uses triangulation and limited segments of the elemental images [15] suffers from resolution degradation of the reconstructed image. Resolution improvement techniques [18–22] can be applied to computational II to obtain higher resolution images [18]; however, the reconstructed higher resolution image is still an image from a single viewpoint. Recently, a new computational II reconstruction method has been proposed [23]. This technique uses all of the information in the elemental images to reconstruct the full 3D volume of a scene, and allows us to reconstruct 3D voxel values at any arbitrary distance from the display microlens array without the image quality being limited by diffraction and optical device degradations.

A full 3D reconstructed scene is achieved by processing the optically obtained elemental images using digitally simulated geometrical optics. However, when the 3D object is located very close to the square-shaped pickup lenslet array in the pickup process, the reconstructed image shows intensity irregularities that appear like a grid superimposed upon the image. The number of inversely mapped elemental images that overlap to reconstruct a given pixel may differ from the number used to reconstruct adjacent pixels, and these variations in overlap generate the grid structure. As a result, some adjacent reconstructed pixels may have significant intensity differences, which degrade the quality of the reconstructed images. In this paper, to obtain a full 3D reconstructed scene without the grid structure, we normalize the reconstructed pixels by the number of overlaps of the inversely mapped elemental images used in the computational 3D volumetric reconstruction II. To further increase the reconstructed image resolution, we also apply time multiplexing of the elemental images using the moving array lenslet technique (MALT) [22]. The increased sampling rate provided by MALT enhances the quality of the reconstructed images. In this paper, we perform the MALT optically in the pickup process and computationally in the reconstruction process, which we refer to as hybrid MALT. The structure of this paper is as follows. Overviews of 3D volumetric computational II reconstruction and MALT are given in Section 2. The improved volumetric II reconstruction of a 3D scene with normalization and hybrid MALT is presented in Section 3, and conclusions are drawn in Section 4.

2. Overview

2.1 3D volumetric reconstruction using computational II

In the pickup process of the II system, each voxel of the 3D object is mapped through the pickup microlens array onto its imaging plane to form the elemental images, which are recorded by a CCD camera. Therefore, each voxel of the 3D object (such as the car in Fig. 2) within the viewing angle of the system contributes to the pixels of the elemental images. Each elemental image carries different perspective and distance information about the 3D object. The 3D volumetric reconstruction by computational II extracts pixels from the elemental images by inverse mapping through a computer-synthesized (virtual) pinhole array and displays the corresponding voxels [23]. The elemental images inversely mapped through the synthesized pinhole array may overlap each other at any display plane z for M > 1, where M is the magnification factor. As shown in Fig. 3, M is the ratio of the distance between the synthesized pinhole array and the reconstruction image plane at z to the distance g between the synthesized pinhole array and the elemental image plane, that is, M = z/g. The intensity at the reconstruction plane is inversely proportional to the square of the distance between the elemental image and the reconstruction plane. The inverse mappings of all the elemental images with magnification factor M form a single image at any reconstruction image plane z = L, and the 3D volume information is formed by repeating this process for all reconstruction planes of interest at different distances. Thus, we use all of the information in the recorded elemental images to reconstruct a full 3D scene, which requires only simple inverse mapping and superposition operations.
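To make this concrete, the following Python/NumPy sketch implements the inverse mapping and superposition for a single reconstruction plane. It is an illustration of the principle under simplifying assumptions (square grayscale elemental images, integer magnification, nearest-neighbor magnification), not the authors' implementation; all names are ours.

```python
import numpy as np

def reconstruct_plane(elemental, s, g, z):
    """Superpose inversely mapped elemental images at one depth plane z.

    elemental : array of shape (m, n, s, s) holding the elemental images
    s         : elemental image size in pixels
    g         : pinhole-array-to-elemental-image-plane distance (same unit as z)
    z         : reconstruction plane distance; magnification M = z / g
    """
    m, n = elemental.shape[:2]
    M = int(round(z / g))                  # integer M keeps the sketch simple
    recon = np.zeros((s * (m + M), s * (n + M)))
    for p in range(m):
        for q in range(n):
            # Magnify elemental image (p, q) by M (nearest-neighbor replication).
            mag = np.kron(elemental[p, q], np.ones((M, M)))
            # Adjacent magnified images are offset by one pitch (s pixels) while
            # each spans s*M pixels, so they overlap wherever M > 1.
            recon[s * p:s * p + s * M, s * q:s * q + s * M] += mag / (z + g) ** 2
    return recon
```

Repeating this for a range of z values yields the reconstructed volume; the 1/(z + g)² weighting models the inverse-square intensity fall-off noted above, with the lateral term omitted for brevity.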

2.2 Moving array lenslet technique

Generally, a convergence–accommodation conflict exists when a stereoscopic display is used [1]. In II, however, true 3D images can be integrated. The optical MALT [22] provides improved viewing resolution and viewing angle for true 3D images without the convergence–accommodation conflict. Figure 1 illustrates the pickup and display II system using MALT. The pickup and display processes of the MALT are performed optically. With MALT, the spatial sampling rate is increased by moving the sampling points of the lenslet array rapidly and synchronously in the lateral directions. The movement speed is adjusted to be within the retention time of the afterimage effect of the human eye for both the pickup and display processes. In other words, spatial ray information from the stationary lenslet array is time multiplexed. The movement range does not need to be larger than one pitch, because the lenslet array is periodic. The movement of the lenslets is flexible as long as the pickup lenslet array and display lenslet array move synchronously; that is, they can be moved along the x axis, along the y axis, diagonally, or in a circular path. Because the pickup lenslet array and display lenslet array move synchronously and fast enough in the plane perpendicular to the optical axis, observers see a reconstructed image of the object with improved viewing resolution. Let v_x and v_y be the velocities of the lenslet array movement along the x and y axes, respectively. Then, for a stationary object, the velocities in each direction should satisfy v_x > p_x S and v_y > p_y S, where S is the inverse of the electronic shutter speed of the CCD or the inverse of the response time of the human eye, and p_x, p_y are the lenslet pitches along the x and y axes, respectively. As the lenslet array moves, the elemental images change and take on different perspectives within one lenslet pitch. Different elemental image sets are recorded at different sampling points by the 2D image sensor. For optical reconstruction, these recorded elemental image sets are integrated in the time domain by displaying them on the SLM or LCD at the same sampling speed used for the elemental images in the pickup procedure.
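As a quick numerical illustration of the velocity condition (the shutter speed here is assumed for the example only): with a lenslet pitch of p_x = 1.09 mm, as in the experiments of Section 3, and an electronic shutter time of 1/60 s, so that S = 60 s⁻¹, the lenslet arrays must satisfy v_x > p_x S = 1.09 mm × 60 s⁻¹ ≈ 65 mm/s.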


Fig. 1. An illustration of optical pickup and display of the II image using MALT. The pickup microlens array and display microlens array move synchronously. (malt.avi: 1.3 MB)


3. 3D volumetric reconstruction with improved resolution

Figure 2 shows a toy car illuminated by incoherent light, used as the 3D object in the experiments. The pickup microlens array is placed in front of the object to form the elemental image array. The distance between the pickup microlens array and the closest part of the car is 7 mm. The elemental images of the object are captured with a digital camera through the pickup microlens array. The microlens array used in the experiments has 53 × 53 square refractive lenses in a 55 mm square area. The size of each lenslet is 1.09 mm × 1.09 mm, with less than 7.6 μm separation. The focal length of each microlens is 3.3 mm. The size of each captured elemental image is 70 × 70 pixels.


Fig. 2. 3D object used in the experiments.


3.1 Hybrid MALT

We propose a hybrid MALT, which performs the pickup MALT process optically and implements the display MALT process computationally. With this method, we can obtain multiple sets of elemental images without having to move a display lenslet array at the velocity required in MALT with optical reconstruction [22]. Moreover, instead of moving a display lenslet array synchronously with the pickup lenslet array, we reconstruct the 3D scene computationally with a synthesized virtual pinhole array, according to the calculated positions and intensities of the inversely mapped elemental images.

Let k be the total number of elemental image sets captured with the optical MALT pickup process. Each unique elemental image set is obtained by moving the pickup lenslet array within one pitch of the lenslet. Figure 3 illustrates the lateral coordinates of the reconstruction plane for the pth elemental image of the i-th elemental image set in the (x, z) plane. Here (x, y) are the lateral coordinates in the reconstructed image plane, and z is the longitudinal coordinate. In this example, the i-th lenslet array position is displaced downward compared with the 1st. To represent color images, we capture the elemental images and reconstruct the 3D image for three wavelengths (blue, red, and green). Let I_pq^i be the elemental image in the pth row and qth column of the i-th elemental image set, and O_pq^i(x, y, z; λ) be the inversely mapped image of the elemental image I_pq^i at the reconstructed voxel location (x, y, z) for each wavelength λ. I_pq^i is the image of the 3D object through the microlens in the pth row and qth column for the i-th position of the pickup microlens array; therefore, it is also a function of the wavelength of the rays emanating from the object [13]. Let x_i/M and y_i/M be the relative displacements of the lenslet array in the x and y directions for the i-th elemental image set with respect to the 1st elemental image set. Then the relative displacements of O_pq^i(x, y, z; λ) at the reconstructed image plane are (x_i, y_i) in the (x, y) directions, respectively. Therefore, O_pq^i(x, y, z; λ) can be represented as:

$$O_{pq}^{i}(x-x_i,\,y-y_i,\,z;\lambda)=\frac{I_{pq}^{i}\left(\frac{x-x_i}{M}+\left(1+\frac{1}{M}\right)s_x\,p,\;\frac{y-y_i}{M}+\left(1+\frac{1}{M}\right)s_y\,q;\;\lambda\right)}{(z+g)^2+\left[(x-s_x\,p)^2+(y-s_y\,q)^2\right]\left(1+\frac{1}{M}\right)^2}\,,\tag{1}$$
$$\text{for}\quad\begin{cases}s_x\left(p-\frac{M}{2}\right)+x_i\le x\le s_x\left(p+\frac{M}{2}\right)+x_i\\[2pt]s_y\left(q-\frac{M}{2}\right)+y_i\le y\le s_y\left(q+\frac{M}{2}\right)+y_i\end{cases}$$

where s_x, s_y are the sizes of the elemental image I_pq^i in the x and y directions, respectively. The relative displacements x_i, y_i for the 1st elemental image set are zero. For the i-th elemental image set, the reconstructed 3D image at (x, y, z) is the superposition of all the inversely mapped elemental images:

$$O^{i}(x-x_i,\,y-y_i,\,z;\lambda)=\sum_{p=0}^{m-1}\sum_{q=0}^{n-1}O_{pq}^{i}(x-x_i,\,y-y_i,\,z;\lambda),\tag{2}$$

Fig. 3. Illustration of the lateral coordinates of the elemental image plane and the reconstructed image plane for the i-th elemental image set in the digital MALT reconstruction. The relative displacement at the elemental image plane is x_i/M, and the relative displacement at the reconstructed plane is x_i. The magnification factor is M = z/g.


where m, n are the numbers of elemental images in the x and y directions, respectively. From Eq. (2), we can obtain the computational hybrid MALT image at display distance z as the superposition of all k computationally reconstructed images, each with its own displacement:

$$O(x,y,z;\lambda)=\sum_{i=1}^{k}O^{i}(x-x_i,\,y-y_i,\,z;\lambda),\tag{3}$$

where k is the total number of pickup MALT positions, that is, the total number of recorded elemental image sets used to reconstruct the 3D scene.

In our experiments, we used 12 different elemental image sets obtained by moving the pickup lenslet array within one pitch of a lenslet. As we move the pickup lenslet array, we obtain elemental images with different perspectives. Examples of the captured elemental image arrays used to reconstruct the 3D scene are shown in Fig. 4. Since the object is close to the microlens array, the elemental images of the front and rear parts of the object are not formed in the same imaging plane; therefore, the elemental images may contain slightly blurred parts of the object. The relative displacements between the first set of elemental images and the other 11 sets are given in Table 1 in terms of the number of pixels. Roughly speaking, the pickup lenslet array was moved diagonally. If x_i/M or y_i/M exceeds one pitch, we can take the displacement modulo one pitch, that is, x_i/M mod (pitch) or y_i/M mod (pitch), each of which is less than one pitch. For elemental image set 12, the pixel displacement in the x direction is 72, which can be treated as 72 mod 70 = 2 pixels, as in the sketch below.
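In code, this bookkeeping reduces to wrapping each displacement modulo one pitch, scaling it by M at the reconstruction plane, and superposing the shifted reconstructions as in Eq. (3). A minimal sketch reusing the hypothetical reconstruct_plane from Section 2.1 (np.roll wraps at the borders; a real implementation would pad instead):

```python
def hybrid_malt(elemental_sets, shifts_px, s, g, z, pitch_px=70):
    """Sketch of Eq. (3): superpose k reconstructions with their MALT shifts.

    elemental_sets : list of k arrays of shape (m, n, s, s)
    shifts_px      : list of k (dx, dy) elemental-plane displacements in pixels,
                     i.e. (x_i / M, y_i / M) relative to set 1 (cf. Table 1)
    pitch_px       : pixels per lenslet pitch; displacements wrap modulo one pitch
    """
    M = int(round(z / g))
    total = None
    for elem, (dx, dy) in zip(elemental_sets, shifts_px):
        dx, dy = dx % pitch_px, dy % pitch_px   # e.g. 72 mod 70 = 2 for set 12
        recon = reconstruct_plane(elem, s, g, z)
        # The displacement at the reconstruction plane is M times the
        # elemental-plane displacement: x_i = M * (x_i / M).
        shifted = np.roll(recon, shift=(M * dx, M * dy), axis=(0, 1))
        total = shifted if total is None else total + shifted
    return total
```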


Fig. 4. Example of two different sets of elemental images used to reconstruct the 3D scene. Elemental image sets 1 (left) and 7 (right) are shown.


Table 1. Displacement, in pixels, of the elemental images at each pickup position

3.2 Intensity normalization

Each pixel of the reconstructed image in the display plane at distance z is reconstructed by the overlapping of multiple magnified elemental images when M > 1. In our experimental setup, the distance between the elemental images and the lenslet array is 3.3 mm; therefore, any reconstruction plane beyond 3.3 mm from the display lenslet array has overlapping inversely mapped elemental images. Because the number of overlaps at each voxel depends on the (x, y, z) position and the size of the elemental images, adjacent reconstructed voxels may have different intensities. We propose a method to reduce these intensity irregularities by normalizing the intensity value of each reconstructed voxel by the number of overlaps of the inversely mapped elemental images. Figure 5 shows the reconstructed image at distance z = 7 mm for elemental image set 1 without normalization. The size of the reconstructed image is 2170 × 1190 pixels. As one can see, the image has different intensities according to the number of overlapped magnified elemental images at z = 7 mm. Figure 6 shows the number of overlaps of the inversely mapped elemental images used to reconstruct the image in Fig. 5: some pixels are reconstructed by superimposing nine inversely mapped elemental images, while others are reconstructed from a single one. Figure 7 shows the reconstructed image after normalization by the overlap numbers of Fig. 6. Although the grid structure is still visible, the normalized reconstructed image has far fewer intensity irregularities than the image before normalization, especially in the focused area (in this case, the right headlight is in focus). We can see more details in the focused area with intensity normalization than without it.
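The normalization itself is a per-pixel division by the overlap count. A minimal sketch under the same simplifying assumptions as before (our naming, not the authors' code): accumulate an overlap-count map alongside the superposition, as in Fig. 6, and divide.

```python
def reconstruct_normalized(elemental, s, g, z):
    """Normalize the superposed image by the per-pixel overlap count."""
    m, n = elemental.shape[:2]
    M = int(round(z / g))                       # integer M for simplicity
    recon = np.zeros((s * (m + M), s * (n + M)))
    count = np.zeros_like(recon)                # overlap map, cf. Fig. 6
    for p in range(m):
        for q in range(n):
            mag = np.kron(elemental[p, q], np.ones((M, M)))
            sl = (slice(s * p, s * p + s * M), slice(s * q, s * q + s * M))
            recon[sl] += mag                    # superposition of Eq. (2)
            count[sl] += 1                      # one more overlap on these pixels
    return recon / np.maximum(count, 1)         # avoid division by zero
```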


Fig. 5. Reconstructed image at display distance z = 7 mm with computational II. It is difficult to see the details even in the focused area (in this case, the right headlight area).



Fig. 6. The number of overlapping magnified elemental images for the display plane at distance z = 7 mm. Each pixel value in this figure represents the number of overlaps.



Fig. 7. Reconstructed image after the normalization process at display distance z = 7 mm. It is possible to see the details in the focused area (in this case, the right headlight area).


3.3 3D volumetric reconstruction using normalized hybrid MALT

After the normalization process is performed for the k recorded elemental image sets, we perform computational MALT to obtain the 3D scene. At display distance z, the 12 reconstructed images obtained with the computational II technique are normalized and imported into the computational MALT reconstruction process to improve the resolution of the image at that distance. The 3D scene is obtained by superimposing the individually computed normalized II images. Figure 8 shows the 3D object reconstructed by MALT at z = 7 mm after normalization.
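In terms of the sketches above, the pipeline of this section normalizes each of the k reconstructions before the MALT superposition. A hypothetical end-to-end use, with the parameter values of this section and elemental_sets and shifts_px as assumed in the Section 3.1 sketch:

```python
# 12 elemental image sets, 70 x 70-pixel elemental images, g = 3.3 mm, z = 7 mm
M = int(round(7.0 / 3.3))
planes = []
for elem, (dx, dy) in zip(elemental_sets, shifts_px):
    recon = reconstruct_normalized(elem, s=70, g=3.3, z=7.0)     # Section 3.2
    planes.append(np.roll(recon, (M * (dx % 70), M * (dy % 70)), axis=(0, 1)))
malt_image = sum(planes) / len(planes)     # Eq. (3) superposition over k = 12 sets
```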


Fig. 8. Reconstructed hybrid MALT image after the normalization process at display distance z = 7 mm.


As one can see, much of the grid structure has been removed by the use of computational MALT. However, diagonal lines appear in the unfocused area (in this case, the rear part of the toy car) due to the diagonal movement of the pickup lenslet array in our experiments. This does not affect the quality of the focused reconstructed image. Figures 9(a)–(d) show the normalized computationally reconstructed MALT images at various display distances using the proposed normalization and hybrid MALT II reconstruction techniques. Figure 9(a) shows the reconstructed image at a distance of z = 7 mm, where the right headlight area is well focused. Figure 9(b) shows the reconstructed image at z = 9 mm, which focuses on the front emblem of the car. Figure 9(c) shows the reconstructed image at z = 11 mm, where we can clearly see the left headlight and the rear part of the right front wheel cap. Figure 9(d) shows the reconstructed image at z = 21 mm, where we can see the rear area of the car; in this case, the front area of the car is blurred. Figure 10 is a movie of the reconstructed 3D volume imagery obtained with the proposed computational II, from the image display plane at z = 6 mm to the image display plane at z = 30 mm in increments of 0.1 mm. Using the proposed normalized computational II reconstruction with hybrid MALT, it is possible to obtain an improved-resolution full 3D volume reconstruction.

4. Conclusion

A new 3D object visualization method using computational 3D volumetric reconstruction II and hybrid MALT has been proposed to increase the resolution of the voxel values of a 3D object or scene. Images along the longitudinal axis are reconstructed computationally from elemental image sets optically recorded at various positions of a moving pickup microlens array. For each captured elemental image set, we reconstruct 3D images, and each reconstructed image is normalized to reduce the intensity irregularities that occur due to the different numbers of overlaps of the magnified elemental images. The normalized reconstructed images are then imported into the computational MALT reconstruction process. With 3D images reconstructed by the proposed method, one can observe the improvement in the visual quality of the reconstructed 3D scene due to the increased resolution. The experimental results show that the proposed computational II reconstruction method yields improved image quality compared with an optical II reconstruction method and/or conventional computational II reconstruction methods that do not use MALT.


Fig. 9. Reconstructed hybrid MALT images after the normalization process at display distances (a) z = 7 mm, (b) z = 9 mm, (c) z = 11 mm, and (d) z = 21 mm.



Fig. 10. Movie of the reconstructed 3D volume imagery from the image display plane at z = 6 mm to the image display plane at z = 30 mm, in increments of 0.1 mm. (volume.avi: 685 KB)


References and Links

1. S. A. Benton, ed., Selected Papers on Three-Dimensional Displays (SPIE Optical Engineering Press, Bellingham, WA, 2001).

2. D. H. McMahon and H. J. Caulfield, “A technique for producing wide-angle holographic displays,” Appl. Opt. 9, 91–96 (1970).

3. P. Ambs, L. Bigue, R. Binet, J. Colineau, J.-C. Lehureau, and J.-P. Huignard, “Image reconstruction using electro-optic holography,” in Proc. 16th Annual Meeting of the IEEE Lasers and Electro-Optics Society (LEOS 2003), vol. 1 (IEEE, Piscataway, NJ, 2003), pp. 172–173.

4. N. Davies, M. McCormick, and M. Brewin, “Design and analysis of an image transfer system using microlens array,” Opt. Eng. 33, 3624–3633 (1994).

5. M. Martínez-Corral, M. T. Caballero, and A. Pons, “Axial apodization in 4Pi-confocal microscopy by annular binary filters,” J. Opt. Soc. Am. A 19, 1532–1536 (2002).

6. B. Javidi and F. Okano, eds., Three Dimensional Television, Video, and Display Technologies (Springer, Berlin, 2002).

7. J. W. V. Gissen, M. A. Viergever, and C. N. D. Graff, “Improved tomographic reconstruction in seven-pinhole imaging,” IEEE Trans. Med. Imag. MI-4, 91–103 (1985).

8. L. T. Chang, B. Macdonald, and V. Perez-Mendez, “Axial tomography and three dimensional image reconstruction,” IEEE Trans. Nucl. Sci. NS-23, 568–572 (1976).

9. T. Okoshi, Three-Dimensional Imaging Techniques (Academic Press, New York, 1976).

10. G. Lippmann, “La photographie intégrale,” C. R. Acad. Sci. 146, 446–451 (1908).

11. H. E. Ives, “Optical properties of a Lippmann lenticulated sheet,” J. Opt. Soc. Am. 21, 171–176 (1931).

12. D. L. Marks and D. J. Brady, “Three-dimensional source reconstruction with a scanned pinhole camera,” Opt. Lett. 23, 820–822 (1998).

13. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, New York, NY, 1996).

14. H. Arimoto and B. Javidi, “Integral three-dimensional imaging with digital reconstruction,” Opt. Lett. 26, 157–159 (2001).

15. Y. Frauel and B. Javidi, “Digital three-dimensional image correlation by use of computer-reconstructed integral imaging,” Appl. Opt. 41, 5488–5496 (2002).

16. J.-S. Jang and B. Javidi, “Formation of orthoscopic three-dimensional real images in direct pickup one-step integral imaging,” Opt. Eng. 42, 1869–1870 (2003).

17. J. Arai, F. Okano, H. Hoshino, and I. Yuyama, “Gradient-index lens-array method based on real time integral photography for three-dimensional images,” Appl. Opt. 37, 2034–2045 (1998).

18. A. Stern and B. Javidi, “Three-dimensional image sensing and reconstruction with time-division multiplexed computational integral imaging,” Appl. Opt. 42, 7036–7042 (2003).

19. C. B. Burckhardt, “Optimum parameters and resolution limitation of integral photography,” J. Opt. Soc. Am. 58, 71–76 (1968).

20. T. Okoshi, “Optimum design and depth resolution of lens sheet and projection type three dimensional displays,” Appl. Opt. 10, 2284–2291 (1971).

21. H. Hoshino, F. Okano, H. Isono, and I. Yuyama, “Analysis of resolution limitation of integral photography,” J. Opt. Soc. Am. A 15, 2059–2065 (1998).

22. J.-S. Jang and B. Javidi, “Improved viewing resolution of three-dimensional integral imaging by use of nonstationary micro-optics,” Opt. Lett. 27, 324–326 (2002).

23. S. Hong, J.-S. Jang, and B. Javidi, “Three-dimensional volumetric object reconstruction using computational integral imaging,” Opt. Express 12, 483–491 (2004), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-12-3-483.

Supplementary Material (2)

Media 1: AVI (1304 KB)     
Media 2: AVI (685 KB)     
