Extraction of location coordinates of 3-D objects from computationally reconstructed integral images basing on a blur metric

Open Access

Abstract

In this paper, a novel approach to effectively extract the location coordinates of 3-D objects by employing a blur metric is proposed. From the elemental images of the 3-D objects, plane object images (POIs) are reconstructed along the output plane with the computational integral imaging reconstruction (CIIR) algorithm, in which only the POIs reconstructed on the output planes where the 3-D objects were originally located are in focus, whereas the others are blurred. Therefore, by calculating the blur metrics of the reconstructed POIs, the depth data of the 3-D objects can be extracted. That is, the blur metric is lowest on the focused plane and increases as the output plane moves away from that point. Accordingly, by finding the points of inflection in the map of blur-metric variation, the output planes where the objects are located are finally detected. To show the feasibility of the proposed scheme, some experiments were carried out and their results are presented as well.

©2008 Optical Society of America

1. Introduction

Extraction of the depth cues of three-dimensional (3-D) objects in the real world has been known as one of the important issues in the fields of machine vision, target recognition, tracking, and video surveillance [1,2].

For this purpose, a camera system with an active range-finding sensor, called a depth camera, has been introduced [3]. It can provide depth data of 3-D objects in space in addition to color image data. However, this camera system was basically developed for television program production, so its structure is too complex and its cost too high for commonplace uses.

Moreover, stereo camera-based depth extraction schemes [4,5] have also been proposed. In these systems, stereoscopic video images of 3-D objects are captured with a stereo camera, and the depth data of the 3-D objects are extracted by estimating the disparities between the captured left and right images.

Recently, some new integral imaging-based depth extraction methods have been proposed [6]. Because the elemental images picked up in an integral imaging system each capture their own perspective of the 3-D objects, disparities exist between the picked-up elemental images, and the depth data can be estimated from these disparities. However, since the elemental images are captured through a pinhole (or lenslet) array, the resolution of each picked-up elemental image is severely decreased according to the dimensions of the employed pinhole array. Hence, it may be difficult to obtain accurate depth data of 3-D objects from these low-resolution elemental images with the conventional integral imaging-based depth extraction methods.

Basically, there are two kinds of integral imaging reconstruction techniques: the optical integral imaging reconstruction (OIIR) technique [7] and the computational integral imaging reconstruction (CIIR) technique [8–13]. The CIIR technique computationally reconstructs object images by mapping the elemental images inversely through a virtual pinhole (or lenslet) array based on ray optics. In this method, object images are reconstructed in the form of depth-dependent plane object images (POIs) along the output plane.

Recently, a CIIR-based 3-D image correlator system for extracting the location data of a 3-D target in space was proposed [9]. In this system, elemental images of the reference and target objects were picked up with lenslet arrays, and from these elemental images the reference and target plane images were reconstructed along the output plane by the CIIR technique. Then, by cross-correlating the reconstructed reference and target plane images, the longitudinal distances of the target objects could be extracted.

The ability to reconstruct the POIs of 3-D objects along the output plane is a unique feature of the CIIR method. In this approach, only the POIs reconstructed on the output planes where the 3-D objects were originally located are clearly focused, whereas those reconstructed away from these planes fall out of focus. That is, the reconstructed output images consist of both clearly focused and blurred POIs. Therefore, the depth data of 3-D objects in a scene can be extracted by estimating the blur metric of each POI [14–16].

Since blur is perceptually most apparent along edges or in textured regions, blur measurement is based on the smoothing effect of blur on edges and consequently attempts to measure the spread of edges [14,15]. The blur metric of each POI is estimated to examine which POI is in focus and which is out of focus. With these estimated blur metrics, we can examine the points of inflection. That is, at each plane where an object was originally located, the gradient of the blur metric changes from a negative to a positive value: the POI reconstructed there is focused, and before and after this point the reconstructed images start to go out of focus. In other words, at the focused point the blur metric becomes lowest, whereas it increases sharply when moving away from that point. Therefore, from these estimated blur metrics, the output planes where the objects were originally located can be detected.

Accordingly, in this paper, a novel approach to effectively extract depth data of 3-D objects in a scene by estimating the blur metrics of POIs reconstructed by the CIIR technique is proposed and its performance is analyzed. In addition, to test the feasibility of the proposed method, some experiments with test objects are carried out and the results are discussed.

2. Operational characteristics of the integral imaging system

Fundamentally, an integral imaging system consists of two processes, pickup and display, as shown in Fig. 1. In the pickup process, the intensity and directional information of the rays coming from a 3-D object through a pinhole array is recorded with a charge-coupled device (CCD) camera in the form of a two-dimensional (2-D) elemental image array (EIA) representing different perspectives of the 3-D object, as shown in Fig. 1(a). In the display process, which is the reverse of the pickup process, the recorded EIA is displayed on a display panel such as a liquid crystal display (LCD), and the 3-D image can then be optically reconstructed and observed through a display pinhole array, as shown in Fig. 1(b).

Fig. 1. Conceptual diagram of pickup and display processes in the integral imaging system: (a) Pickup process, (b) Display process.

3. Proposed blur metric-based depth extraction method

Figure 2 shows a flowchart of the proposed method for effectively extracting the depth data of 3-D objects from the computationally reconstructed POIs by employing a blur metric. The proposed scheme consists of three parts: pickup, reconstruction, and depth extraction.

First, the intensity and directional information of the rays coming from a virtual 3-D object is digitally generated in the form of 2-D elemental images representing different perspectives of the virtual 3-D object. From these computationally generated elemental images, a set of POIs of the 3-D object is reconstructed along the output plane by using the CIIR technique [11]. Then, the blur metric of each reconstructed POI is estimated, and the depth data of the objects are finally extracted by examining the points of inflection in the estimated blur metrics. Specifically, at the focused point the blur metric becomes lowest, whereas it increases sharply moving away from that point. Therefore, from these estimated blur metrics, the output planes where the objects were originally located can be detected.

Fig. 2. Flowchart of the proposed scheme

3.1 Pickup part

As illustrated in Fig. 3, the 3-D test object used in this paper consists of three 2-D objects, named ‘Target 1’, ‘Target 2’, and ‘Target 3’, which are located at z1 = 30 mm, z2 = 45 mm, and z3 = 60 mm in front of the pinhole array, respectively. In the pickup part, elemental images of the 3-D test object are computationally picked up. Here, the distance between the virtual pinhole array and the elemental image plane is assumed to be 3 mm. The resolution of the picked-up EIA is 1,292 by 760 pixels because the lenslet array is assumed to consist of 34 by 20 lenslets and each lenslet covers 38 by 38 pixels. The resolution of each of the three 2-D objects is 430 by 360 pixels, and their center locations in the 1,292 by 760 image plane are set to (291, 548), (599, 201), and (970, 520), respectively.

Fig. 3. Systematic diagram of computational pickup of a 3-D test object

Figure 4 shows the computationally generated elemental image array of the test object having a resolution of 1,292 by 760 pixels.

Fig. 4. Computationally generated elemental image array of the test object
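Although the paper does not list its pickup code, the computational pickup described in this subsection reduces to a simple ray-mapping loop: for every virtual pinhole and every sensor pixel behind it, the ray from the pixel through the pinhole is extended to each object plane, and the nearest object intersected by that ray is sampled. The following Python sketch illustrates this under the parameters given above (g = 3 mm, 34×20 pinholes, 38×38 pixels per elemental image); the function name, the plane representation, and the occlusion handling are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

def pickup_eia(planes, g=3.0, lens_pitch=1.08,
               nx_lens=34, ny_lens=20, px_per_lens=38):
    """Computationally pick up an elemental image array (EIA) through a
    virtual pinhole array.  `planes` is a list of (image, z) pairs, nearest
    plane first; each image lies on the plane z (mm) in front of the array,
    shares the sensor's pixel pitch, and uses zero pixels as 'empty'.
    Unvectorized illustrative sketch, not the authors' code."""
    H, W = ny_lens * px_per_lens, nx_lens * px_per_lens
    eia = np.zeros((H, W))
    pitch = lens_pitch / px_per_lens                  # mm per pixel
    for j in range(ny_lens):
        for i in range(nx_lens):
            # pinhole centre (mm) in the array plane (z = 0)
            px0, py0 = (i + 0.5) * lens_pitch, (j + 0.5) * lens_pitch
            for v in range(px_per_lens):
                for u in range(px_per_lens):
                    row, col = j * px_per_lens + v, i * px_per_lens + u
                    # sensor-pixel centre (mm) at distance g behind the pinhole
                    sx, sy = (col + 0.5) * pitch, (row + 0.5) * pitch
                    for img, z in planes:             # nearest plane first
                        # extend the ray (sensor pixel -> pinhole) to plane z
                        ox = px0 + (px0 - sx) * z / g
                        oy = py0 + (py0 - sy) * z / g
                        c, r = int(ox / pitch), int(oy / pitch)
                        if 0 <= r < img.shape[0] and 0 <= c < img.shape[1] and img[r, c]:
                            eia[row, col] = img[r, c]
                            break                     # nearer object occludes farther ones
    return eia
```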

3.2 Reconstruction part

POIs of the test object can be computationally reconstructed from the picked-up elemental images of Fig. 4 by using the CIIR technique. Figure 5 shows a conceptual diagram of the CIIR technique for reconstructing the POI on the output plane z = L [11]. At this distance, each picked-up elemental image is projected inversely through its corresponding virtual pinhole. By digitally simulating the reconstruction process based on ray optics, each elemental image is inversely magnified according to the magnification factor M = L/g, where M is the ratio of the distance between the virtual pinhole array and the reconstructed image plane (L) to the distance between the virtual pinhole array and the EIA plane (g).

Fig. 5. Schematic of CIIR-based reconstruction of POIs from the picked-up EIA

For M > 1, the images inversely mapped through the virtual pinholes overlap and are summed with each other on the reconstructed image plane at z = L. Assuming that the vertical and horizontal sizes of an elemental image are a and b, respectively, the vertical and horizontal sizes of the mapped image on the reconstructed image plane become Ma and Mb according to the magnification factor M = L/g. Each enlarged elemental image is overlapped and summed at the corresponding pixels of the reconstructed image plane. For the complete reconstruction of the POI at a given distance, this process is repeated for all of the elemental images through their corresponding pinholes.

By varying the reconstruction plane, i.e., increasing the distance from the pinhole array in small incremental steps of Δz, a set of POIs is finally reconstructed.
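As a concrete illustration of this reconstruction step, the sketch below magnifies each elemental image by M = L/g, superimposes it around its pinhole position on the output plane, and normalizes by the number of overlapping contributions, following the description above. It is a minimal sketch under those assumptions (bilinear zoom, overlap-count normalization), not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import zoom

def ciir_poi(eia, L, g=3.0, nx_lens=34, ny_lens=20, px_per_lens=38):
    """Reconstruct one plane object image (POI) at distance L (mm) from the
    picked-up elemental image array `eia` by inverse mapping (CIIR).
    Illustrative sketch only."""
    M = L / g                                          # magnification factor
    H, W = eia.shape
    poi = np.zeros((H, W))
    overlap = np.zeros((H, W))                         # contributions per pixel
    for j in range(ny_lens):
        for i in range(nx_lens):
            ei = eia[j*px_per_lens:(j+1)*px_per_lens,
                     i*px_per_lens:(i+1)*px_per_lens]
            big = zoom(ei, M, order=1)                 # elemental image magnified by M
            mh, mw = big.shape
            # centre the magnified image on its pinhole position
            r0 = j*px_per_lens + px_per_lens//2 - mh//2
            c0 = i*px_per_lens + px_per_lens//2 - mw//2
            r1, c1 = max(r0, 0), max(c0, 0)
            r2, c2 = min(r0 + mh, H), min(c0 + mw, W)
            if r1 >= r2 or c1 >= c2:
                continue
            poi[r1:r2, c1:c2] += big[r1-r0:r2-r0, c1-c0:c2-c0]
            overlap[r1:r2, c1:c2] += 1
    return poi / np.maximum(overlap, 1)                # normalise by overlap count

# A set of POIs along the output plane can then be built by stepping z in
# small increments (Delta z = 3 mm in this section), e.g.:
#   pois = {z: ciir_poi(eia, z) for z in candidate_depths}
```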

Figure 6 shows nine POIs of the test object reconstructed from the picked-up EIA of Fig. 4 by the CIIR technique with an incremental step of Δz = 3 mm.

Fig. 6. POIs of the objects reconstructed along the output plane of z

Figure 6 shows that only the POIs reconstructed on the output planes of z1 = 30 mm, z2 = 45 mm, and z3 = 60 mm, where the objects were located during the pickup process, are clearly focused; away from these planes, the POIs go out of focus and appear blurred. Therefore, the depth data of the objects can be obtained by estimating a focusing (or defocusing) parameter of the reconstructed POIs, the so-called blur metric.

3.3 Depth extraction part

As shown in Fig. 6, the CIIR technique provides a set of POIs reconstructed along the output plane by increasing the distance z from the pinhole array in small steps.

Here, the blur metrics of the reconstructed POIs are estimated, and based on these values the depth data of the test objects are extracted by examining the points of inflection of the estimated blur metrics [15]. From these depth data, the output planes where the test objects were originally located can finally be detected.

3.3.1 Estimation of the blur metric

First, an input gray or color image I(x,y) is given, in which x and y are the row and column coordinates in the image, respectively. In addition, Ii(x,y) denotes the ith channel of the image I(x,y), i.e., i = 1 for the gray channel and i = 3 for the R, G, B channels. The gradient at any pixel point P(x,y) can be calculated with a 2-D directional derivative (the Sobel operator is used in this paper) as given in Eq. (1).

$$\nabla I_i(x,y)=\begin{bmatrix} G_x \\ G_y \end{bmatrix}=\begin{bmatrix} \frac{\partial}{\partial x} I_i(x,y) \\ \frac{\partial}{\partial y} I_i(x,y) \end{bmatrix} \tag{1}$$

Then, the magnitude and orientation of the gradient can be obtained from Gx and Gy as shown in Eq. (2) and Eq. (3).

$$\left|\nabla I_i(x,y)\right|=\sqrt{G_x^2+G_y^2} \tag{2}$$
$$\theta_i(x,y)=\tan^{-1}\!\left(\frac{G_y}{G_x}\right) \tag{3}$$

Using Eq. (1), non-maximum suppression is applied: each edge point in a POI is set to zero unless it is a local maximum along the line oriented in the gradient direction. Let Pe(xe,ye) be a local-maximum edge point in a POI; its magnitude and orientation are then given by |∇Ii(xe,ye)| and θi(xe,ye), respectively. In order to calculate the spatial variation, the ψ-axis is introduced, whose origin is at the pixel point Pe(xe,ye) and whose direction is normal to θi(xe,ye), as shown in Fig. 7.
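A minimal Python sketch of Eqs. (1)–(3) together with the non-maximum suppression step might look as follows (using SciPy's Sobel filter); the quantization of the gradient orientation to four neighbor directions is a simplification introduced here for illustration.

```python
import numpy as np
from scipy.ndimage import sobel

def gradient_and_edges(channel):
    """Sobel gradient of one image channel (Eqs. (1)-(3)) followed by
    non-maximum suppression along the gradient direction.  Returns the
    gradient magnitude, the orientation, and a boolean mask of the
    surviving local-maximum edge points.  Unoptimized illustrative sketch."""
    gy = sobel(channel.astype(float), axis=0)          # Gy = d/dy
    gx = sobel(channel.astype(float), axis=1)          # Gx = d/dx
    mag = np.hypot(gx, gy)                             # |grad I|, Eq. (2)
    theta = np.arctan2(gy, gx)                         # orientation, Eq. (3)
    edges = np.zeros_like(mag, dtype=bool)
    ang = (np.rad2deg(theta) + 180.0) % 180.0          # fold to [0, 180)
    for r in range(1, mag.shape[0] - 1):
        for c in range(1, mag.shape[1] - 1):
            a = ang[r, c]
            if a < 22.5 or a >= 157.5:                 # ~horizontal gradient
                n1, n2 = mag[r, c - 1], mag[r, c + 1]
            elif a < 67.5:                             # ~45 degrees
                n1, n2 = mag[r + 1, c + 1], mag[r - 1, c - 1]
            elif a < 112.5:                            # ~vertical gradient
                n1, n2 = mag[r - 1, c], mag[r + 1, c]
            else:                                      # ~135 degrees
                n1, n2 = mag[r + 1, c - 1], mag[r - 1, c + 1]
            edges[r, c] = mag[r, c] > 0 and mag[r, c] >= n1 and mag[r, c] >= n2
    return mag, theta, edges
```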

Fig. 7. Illustration of the Ψ-axis establishment for Pe(xe,ye)

Consider ml(xl,yl) and mr(xr,yr) as the nearest local minima on the left and right sides of the local maximum point Pe(xe,ye), respectively; i.e., ml lies on the negative ψ-axis and mr on the positive ψ-axis, because the origin of the ψ-axis is at Pe(xe,ye). Treating the gradient magnitudes along ψ as a discrete probability distribution with its mean at Pe(xe,ye), corresponding to ψ = 0, the spatial variance is calculated by Eq. (4).

$$\sigma_i^2(p_e)=\frac{1}{m_r-m_l}\sum_{\psi=m_l}^{m_r}\left|\nabla I_i(\psi)\right|\,\psi^2 \tag{4}$$

The blur metric βi(Pe) for a local-maximum edge point Pe(xe,ye) can then be obtained by computing the weighted average of the standard deviation σi and the edge magnitude |∇Ii(pe)|, as given by Eq. (5) [15].

$$\beta_i(p_e)=\eta_\beta\,\frac{\sigma_i(p_e)}{\sigma_i^{\max}}+\left(1-\eta_\beta\right)\frac{\left|\nabla I_i(p_e)\right|}{\left|\nabla I_i(p_e)\right|^{\max}} \tag{5}$$

where σi^max and |∇Ii(pe)|^max are normalization terms denoting the maximum values over all standard deviations and over all edge gradient magnitudes, respectively. The weight ηβ is related to the image contrast. For identical weights of σi(pe) and |∇Ii(pe)|, regardless of the image contrast, Eq. (5) reduces to Eq. (6).

$$\beta_i(p_e)=\frac{1}{2}\left(\frac{\sigma_i(p_e)}{\sigma_i^{\max}}+\frac{\left|\nabla I_i(p_e)\right|}{\left|\nabla I_i(p_e)\right|^{\max}}\right) \tag{6}$$

This makes it possible to eliminate the processing time of the multi-scale retinex (MSR) algorithm [17]. Therefore, the blur measure of the edges of a POI reconstructed on an arbitrary output plane can be calculated by Eq. (6).
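Combining Eqs. (4) and (6), the blur metric of every local-maximum edge point can be sketched as below. The one-pixel stepping along the ψ-axis (taken here along the gradient direction), the bounded search range for the nearest local minima ml and mr, and the handling of degenerate points are assumptions made for illustration; the weighting follows the equal-weight form of Eq. (6).

```python
import numpy as np

def blur_metric_per_edge(mag, theta, edges, max_steps=15):
    """Blur metric beta_i(p_e) of Eq. (6) for every local-maximum edge point.
    For each edge point we step along the psi-axis, locate the nearest local
    minima of the gradient magnitude on both sides (m_l, m_r), evaluate the
    spatial variance of Eq. (4), and combine it with the normalized edge
    strength using equal weights.  Illustrative sketch."""
    H, W = mag.shape
    ys, xs = np.nonzero(edges)
    sigmas, strengths = [], []
    for y, x in zip(ys, xs):
        dx, dy = np.cos(theta[y, x]), np.sin(theta[y, x])   # unit step along psi

        def profile(sign):
            """Gradient magnitudes at psi = sign*1, sign*2, ... up to a local minimum."""
            vals = []
            for k in range(1, max_steps + 1):
                yy, xx = int(round(y + sign * k * dy)), int(round(x + sign * k * dx))
                if not (0 <= yy < H and 0 <= xx < W):
                    break
                vals.append(mag[yy, xx])
                if len(vals) >= 2 and vals[-1] > vals[-2]:   # passed the local minimum
                    vals.pop()
                    break
            return vals

        left, right = profile(-1), profile(+1)
        m_l, m_r = -len(left), len(right)
        if m_r == m_l:                                       # no usable neighbourhood
            continue
        psi = np.arange(m_l, m_r + 1)
        grad = np.array(left[::-1] + [mag[y, x]] + right)    # |grad I(psi)|, psi = m_l..m_r
        sigma2 = np.sum(grad * psi ** 2) / (m_r - m_l)       # Eq. (4)
        sigmas.append(np.sqrt(sigma2))
        strengths.append(mag[y, x])

    if not sigmas:
        return np.array([])
    sigmas, strengths = np.array(sigmas), np.array(strengths)
    return 0.5 * (sigmas / sigmas.max() + strengths / strengths.max())   # Eq. (6)
```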

3.3.2 Calculation of the mean blur metric

The mean blur metric (MBM) of each POI, shown in Eq. (7), is calculated by the following procedure. First, the blur metrics of all local-maximum edge points in a POI are summed and divided by the number of local-maximum edge points, giving the average blur metric per local-maximum edge point, βi^mean. Then, βi^mean is multiplied by αi, the ratio of the number of local-maximum points per channel, Mi, to the number of pixels in the POI, N; αi thus represents the fraction of local-maximum edge points per channel in the POI.

The mean blur metric of a POI, βMBM, is then given by Eq. (7); it provides discrimination for depth detection of the target objects per POI rather than per point.

$$\beta_{\mathrm{MBM}}=\alpha_i\times\beta_i^{\mathrm{mean}}=\frac{M_i}{iN}\times\beta_i^{\mathrm{mean}} \tag{7}$$

where N is the number of pixels in a POI and Mi is the number of local-maximum points per channel. From Eq. (7), the MBM of each POI can be calculated; the results are depicted in Fig. 8. Figure 8 shows the variation of the MBM along the output plane, and we can see three points of inflection in this figure, indicating that there are three potential objects. As noted above, the MBM becomes lowest at the focused point, where an object was originally located, whereas it increases sharply moving away from that point.
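Given the per-point metrics, the MBM of Eq. (7) for a single POI is then a short computation. The sketch below reuses the helper sketches above and treats the POI as a single gray channel (i = 1); it is illustrative only.

```python
import numpy as np

def mean_blur_metric(poi, n_channels=1):
    """Mean blur metric (MBM) of one POI, Eq. (7):
    beta_MBM = alpha_i * beta_i_mean = (M_i / (i * N)) * beta_i_mean,
    with M_i the number of local-maximum edge points per channel, N the
    number of pixels in the POI, and i the number of channels."""
    mag, theta, edges = gradient_and_edges(poi)        # Eqs. (1)-(3) + NMS
    beta = blur_metric_per_edge(mag, theta, edges)     # Eq. (6) per edge point
    if beta.size == 0:
        return 0.0
    beta_mean = beta.mean()                            # average blur per edge point
    alpha = beta.size / (n_channels * poi.size)        # M_i / (i * N)
    return alpha * beta_mean
```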

Fig. 8. MBM values of the POIs reconstructed along the output plane (For non-overlapped objects case).

Table 1 shows the locations, MBM values, and gradient values of the three points of inflection. For example, as shown in Fig. 8, the first point of inflection occurred at z = 30 mm, where the MBM value was found to be 30.47×10⁻⁴, whereas it sharply increased to 54.57×10⁻⁴ and 50.17×10⁻⁴ on the neighboring planes of 27 and 33 mm, respectively. At the same time, the gradients of the MBM at the neighboring planes of 27 and 33 mm were calculated to be -8.03×10⁻⁴ and +6.56×10⁻⁴, respectively. This change of the gradient of the MBM from a negative to a positive value confirms that there is a point of inflection between them and that one object exists on the plane of z = 30 mm. Therefore, from Table 1, the output planes where ‘Target 1’, ‘Target 2’, and ‘Target 3’ were originally located are found to be z = 30, 45, and 60 mm, respectively.
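The detection rule used here, a change of the MBM gradient from negative to positive between neighboring planes, can be sketched as follows; the finite-difference gradient and the function name are assumptions made for illustration.

```python
import numpy as np

def detect_object_planes(z_values, mbm_values):
    """Return the output planes at which the MBM gradient changes from a
    negative to a positive value, i.e. the local minima of the MBM curve
    identified as 'points of inflection' in Sec. 3.3.2.  Illustrative sketch."""
    z = np.asarray(z_values, dtype=float)
    m = np.asarray(mbm_values, dtype=float)
    grad = np.diff(m) / np.diff(z)                     # finite-difference MBM gradient
    planes = []
    for k in range(1, len(grad)):
        if grad[k - 1] < 0 and grad[k] > 0:            # negative -> positive sign change
            planes.append(z[k])                        # MBM minimum at plane z[k]
    return planes

# With POIs reconstructed every 3 mm as in Fig. 8, this would return the
# planes z = 30, 45, and 60 mm for the three test targets, e.g.:
#   zs = sorted(pois)
#   detect_object_planes(zs, [mean_blur_metric(pois[z]) for z in zs])
```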

Table 1. Three points of inflection for totally non-overlapped objects

In addition, once the longitudinal positions of the objects have been detected, the lateral coordinates of the objects can also be obtained through correlations between the reference object images and the corresponding POIs reconstructed on the planes of z = 30, 45, and 60 mm, where the objects were originally located. Figure 9 shows lateral profiles of these correlation results. From the correlation outputs of Figs. 9(a), 9(b), and 9(c), the normalized cross-correlation (NCC) values and lateral correlation positions for ‘Target 1’, ‘Target 2’, and ‘Target 3’ were found to be 0.8659, 0.5723, and 0.8087 and (291, 548), (599, 201), and (970, 520), respectively. Finally, the 3-D location coordinates of the test objects in space are obtained as (291, 548, 30), (599, 201, 45), and (970, 520, 60) in the Cartesian coordinate system, based on the experimental results of Figs. 8 and 9. Comparing these results with the original location coordinates of the test objects shows that they are exactly the same, which confirms that the proposed method provides good discrimination performance in detecting the depth and location coordinates of target objects in space.
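For the lateral coordinates, any normalized cross-correlation (NCC) between a reference object image and the POI reconstructed at the detected depth can be used. The sketch below uses OpenCV's matchTemplate as one possible implementation; the authors' exact correlator may differ.

```python
import cv2
import numpy as np

def lateral_position(poi, reference):
    """Locate a reference object laterally within the POI reconstructed at
    its detected depth plane via normalized cross-correlation.  Returns the
    peak NCC value and the (x, y) centre of the best match.  Illustrative
    sketch using OpenCV's template matching."""
    poi8 = cv2.normalize(poi, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    ref8 = cv2.normalize(reference, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    ncc = cv2.matchTemplate(poi8, ref8, cv2.TM_CCOEFF_NORMED)
    _, peak, _, (x0, y0) = cv2.minMaxLoc(ncc)          # best NCC value and its top-left corner
    h, w = ref8.shape
    return peak, (x0 + w // 2, y0 + h // 2)            # centre of the matched region
```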

Fig. 9. Correlation outputs for ‘Target 1, 2, 3’

Accordingly, even though no prior information about the output planes where the objects were originally located is given, the depth data of the objects can be accurately detected simply by estimating the MBM of the POIs in the proposed method. These successful experimental results show that the depth and 3-D location data of target objects in space can be effectively detected with the proposed blur-metric-based scheme, and they confirm the feasibility of the proposed method for practical applications such as machine vision, target recognition and tracking, and video surveillance.

3.4 Depth extraction for the case of totally overlapped objects

So far, we have discussed the proposed depth extraction method for the case of objects that are totally non-overlapped along the z-direction (called ‘Case 1’ here). In this section, we perform the same experiments for the extreme case of totally overlapped objects (‘Case 2’) to confirm the versatility of the proposed scheme. As illustrated in Fig. 10, the test 3-D object consists of three 2-D objects, named ‘Target 1’, ‘Target 2’, and ‘Target 3’, located at z1 = 30 mm, z2 = 45 mm, and z3 = 60 mm in front of the lenslet array, respectively, just like those of Fig. 3, except that they are totally overlapped along the z-direction. The employed lenslet array is identical to that of Fig. 3. The resolution of each of the three 2-D objects is again 430 by 360 pixels, but the center locations of the objects are set to (622, 375), (623, 378), and (625, 372), respectively, so that they are totally overlapped along the z-direction.

Fig. 10. Systematic diagram of computational pickup of totally overlapped objects

Following the same procedure as for the totally non-overlapped objects described above, elemental images of the 3-D test object shown in Fig. 10 are computationally picked up, and from these picked-up elemental images the POIs of the test object are reconstructed with the CIIR algorithm. Then, the MBM of each POI is calculated; the results are illustrated in Fig. 11. Figure 11 shows the variation of the MBM along the output plane, and we can again see three points of inflection, just as in the case of Fig. 8, even though the three targets were located so as to be totally overlapped in the z-direction.

Fig. 11. MBM values of the POIs reconstructed along the output plane (For overlapped objects case).

Table 2 also shows the locations, MBM values, and gradient values of the three points of inflection for ‘Case 2’. For example, as shown in Fig. 11, the second point of inflection occurred at z = 45 mm, where the MBM value was found to be 27.0×10⁻⁴, whereas it gradually increased to 30.97×10⁻⁴ and 33.87×10⁻⁴ on the neighboring planes of 42 and 48 mm, respectively. At the same time, the gradients of the MBM at the neighboring planes of 42 and 48 mm were calculated to be -1.32×10⁻⁴ and +2.29×10⁻⁴, respectively. This change of the gradient of the MBM from a negative to a positive value confirms that there is a point of inflection between them and that one object exists on the plane of z = 45 mm. Therefore, from Table 2, the output planes where ‘Target 1’, ‘Target 2’, and ‘Target 3’ were originally located are found to be z = 30, 45, and 60 mm, respectively.

Table 2. Three points of inflection for totally overlapped objects

Moreover, through correlations between the reference object images and the corresponding POIs reconstructed on the planes of z = 30, 45, and 60 mm, the lateral positions of ‘Target 1’, ‘Target 2’, and ‘Target 3’ were found to be (622, 375), (623, 378), and (625, 372), respectively. Accordingly, the 3-D location coordinates of the test objects were finally obtained as (622, 375, 30), (623, 378, 45), and (625, 372, 60), respectively. Comparing these results with the original location coordinates of the test objects, they are again found to be the same.

Comparing the results of ‘Case 1’ with those of ‘Case 2’ confirms that in both cases three points of inflection occurred at the same planes of z = 30, 45, and 60 mm where the objects were originally located, so the depth data of the three objects can be accurately detected with the proposed method whether or not they overlap along the output plane. However, there are some differences in the gradient and MBM values between the two extreme cases. First, the changes in the gradient values around the inflection points are much larger in ‘Case 1’ than in ‘Case 2’, owing to the relatively weaker interaction between the target objects in ‘Case 1’ compared with that in ‘Case 2’. Second, the overall MBM values of ‘Case 2’ are relatively lower than those of ‘Case 1’, because Mi, the number of local-maximum points in a POI, is much smaller in ‘Case 2’ than in ‘Case 1’.

Therefore, as the target objects become increasingly overlapped along the z-direction, the discrimination performance at the points of inflection in the MBM curve may deteriorate somewhat. Nevertheless, the successful depth extraction experiments for the two extreme cases of totally non-overlapped and totally overlapped objects validate the feasibility of the proposed method for practical applications.

3.5 Depth extraction for the case of real objects

In addition, optical experiments with real 3-D objects were also carried out to demonstrate the possibility of practical implementation of the proposed method. In the experiment, a simple 3-D scene was used, consisting of two 2-D pattern objects, named ‘Object 1’ and ‘Object 2’, located at zo1 = 30 mm and zo2 = 45 mm in front of the lenslet array, respectively. Here, a lenslet array with 34×20 lenslets was used, located at z = 0 mm. Each lenslet is 1.08 mm in size, the focal length of a lenslet is 3 mm, and a single elemental image is composed of 38×38 pixels.

Figure 12 shows the experimental setup for picking up elemental images of the totally non-overlapped real objects, ‘Object 1’ and ‘Object 2’. With the optical pickup system of Fig. 12, elemental images of the real objects were captured with the lenslet array and a CCD camera.

Fig. 12. Experimental setup for optical pickup of elemental images of real objects

From these picked-up elemental images, the POIs of the real objects were reconstructed with the CIIR algorithm. Then, the MBM of each POI was calculated; the results are illustrated in Fig. 13. Figure 13 shows the variation of the MBM along the output plane, and we can again see two points of inflection, just as in the computational cases of Figs. 8 and 11.

Fig. 13. MBM values of the POIs reconstructed along the output plane (For non-overlapped real objects case).

Table 3 shows the calculated locations, MBM values, and gradient values of the two points of inflection for the real objects. That is, from the change of the gradient of the MBM from a negative to a positive value, the output planes where ‘Object 1’ and ‘Object 2’ were originally located were found to be z = 30 and 45 mm, respectively. Comparing these results with the original locations of the real objects, they were found to be the same, which confirms the feasibility of the proposed method in practical implementations.

Table 3. Two points of inflection for real objects

Moreover, through correlations between the reference object images and the corresponding POIs reconstructed on the planes of z = 30 and 45 mm, the lateral positions of ‘Object 1’ and ‘Object 2’ were found to be (430, 285) and (735, 308), respectively. Accordingly, the 3-D location coordinates of the real objects were finally obtained as (430, 285, 30) and (735, 308, 45), respectively.

4. Conclusion

In this paper, a novel approach to effectively extract the depth data of 3-D objects using a blur metric has been proposed. A set of POIs of the 3-D objects is reconstructed along the output plane using the CIIR algorithm, in which only the POIs reconstructed on the output planes where the 3-D objects were located are focused, whereas the others are blurred. Then, by estimating the blur metrics of the reconstructed POIs, the depth information of the 3-D objects is extracted. Computational experiments were carried out for the two extreme cases of totally non-overlapped and totally overlapped test objects, and optical experiments with real objects were also performed. The successful results for all these cases confirm the feasibility of the proposed blur-metric-based location coordinate extraction method.

Acknowledgment

This research was supported by the MIC (Ministry of Information and Communication), Korea, under the ITRC (Information Technology Research Center) support program supervised by the IITA (Institute of Information Technology Assessment) (IITA-2008-C1090-0801-0018).

References and links

1. J.-I. Park and S. Inoue, “Acquisition of sharp depth map from multiple cameras,” Signal Processing: Image Commun. 14, 7–19 (1998).

2. J.-H. Ko and E.-S. Kim, “Stereoscopic video surveillance system for detection of target’s 3D location coordinates and moving trajectories,” Opt. Commun. 191, 100–107 (2006).

3. G. J. Iddan and G. Yahav, “Three-dimensional imaging in the studio and elsewhere,” Proc. SPIE 4298, 48–55 (2000).

4. J.-H. Lee, J.-H. Ko, K.-J. Lee, J.-H. Jang, and E.-S. Kim, “Implementation of stereo camera-based automatic unmanned ground vehicle system for adaptive target detection,” Proc. SPIE 5608, 188–197 (2004).

5. J.-H. Park, S. Jung, H. Choi, Y. Kim, and B. Lee, “Depth extraction by use of a rectangular lens array and one-dimensional elemental image modification,” Appl. Opt. 43, 4882–4895 (2004).

6. J.-H. Park, Y. Kim, J. Kim, S.-W. Min, and B. Lee, “Three-dimensional display scheme based on integral imaging with three-dimensional information processing,” Opt. Express 12, 6020–6032 (2004).

7. S.-W. Min, B. Javidi, and B. Lee, “Enhanced three-dimensional integral imaging system by use of double display devices,” Appl. Opt. 42, 4186–4195 (2003).

8. B. Javidi, R. Ponce-Díaz, and S.-H. Hong, “Three-dimensional recognition of occluded objects by using computational integral imaging,” Opt. Lett. 31, 1106–1108 (2006).

9. J.-S. Park, D.-C. Hwang, D.-H. Shin, and E.-S. Kim, “Resolution-enhanced three-dimensional image correlator using computationally reconstructed integral images,” Opt. Commun. 276, 72–79 (2007).

10. H. Arimoto and B. Javidi, “Integral three-dimensional imaging with digital reconstruction,” Opt. Lett. 26, 157–159 (2001).

11. S. Hong, J.-S. Jang, and B. Javidi, “Three-dimensional volumetric object reconstruction using computational integral imaging,” Opt. Express 12, 483–491 (2004).

12. D.-H. Shin, E.-S. Kim, and B. Lee, “Computational reconstruction technique of three-dimensional object in integral imaging using a lenslet array,” Jpn. J. Appl. Phys. 44, 8016–8018 (2005).

13. D.-H. Shin, M. Cho, K.-C. Park, and E.-S. Kim, “Computational technique of volumetric object reconstruction in integral imaging by use of real and virtual image fields,” ETRI J. 27, 208–712 (2005).

14. P. Marziliano, F. Dufaux, S. Winkler, and T. Ebrahimi, “A no-reference perceptual blur metric,” in Proceedings of the International Conference on Image Processing, Vol. 3, pp. 57–60 (2002).

15. Y. C. Chung, J. M. Wang, R. R. Bailey, and S. W. Chen, “A non-parametric blur measure based on edge analysis for image processing applications,” in Proceedings of the IEEE Conference on Cybernetics and Intelligent Systems (IEEE, 2004), Vol. 1, pp. 356–360.

16. R. Youmaran and A. Adler, “Using red-eye to improve face detection in low quality video image,” in IEEE Canadian Conference on Electrical and Computer Engineering (IEEE, 2006), pp. 1940–1943.

17. Z. Rahman, D. J. Jobson, and G. A. Woodell, “A multiscale retinex for color rendition and dynamic range compression,” NASA Langley Technical Report (1996).
