
Computational integral-imaging reconstruction-based 3-D volumetric target object recognition by using a 3-D reference object

Open Access

Abstract

In this paper, we propose a novel computational integral-imaging reconstruction (CIIR)-based three-dimensional (3-D) image correlator system for the recognition of 3-D volumetric objects by employing a 3-D reference object. That is, a number of plane object images (POIs) computationally reconstructed from the 3-D reference object are used for 3-D volumetric target recognition. In other words, simultaneous 3-D image correlations between two sets of target and reference POIs, which are depth-dependently reconstructed by using the CIIR method, are performed for effective recognition of 3-D volumetric objects in the proposed system. Successful experiments with this CIIR-based 3-D image correlator confirmed the feasibility of the proposed method.

© 2009 Optical Society of America

Data sets associated with this article are available at http://hdl.handle.net/10376/1492. Links such as View 1 that appear in figure captions and elsewhere will launch custom data views if ISP software is present.

1. Introduction

Recently, much research has been conducted on three-dimensional (3-D) imaging and display technology owing to high interest throughout the world [1, 2, 3, 4]. Among these studies, there have been considerable research and implementation attempts in 3-D object recognition by use of digital holography [5, 6, 7], several perspectives of objects [8, 9, 10], and the integral imaging method [11, 12, 13, 14, 15]. In 3-D recognition using digital holography, coherent illumination is needed [5, 6, 7]; this requirement prevents practical implementation of digital-holography-based 3-D recognition. Other methods for 3-D correlation involve the use of several perspectives of the objects [8, 9, 10], which can be obtained by moving either the object or the camera. Recently, via several approaches, the integral imaging technique, which has been known as one of the popular techniques for recording and displaying 3-D scenes, has been newly employed for the recognition of 3-D objects [11, 12, 13, 14, 15].

Basically, integral imaging is composed of two processes: pickup and reconstruction [16, 17]. In the pickup process, a set of demagnified images with different perspectives of a 3-D object, called an elemental image array (EIA), can be optically captured by using a CCD camera through a lenslet array or digitally generated by using a computer simulation model based on ray optics.
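
As a rough illustration of the digital pickup model (not the paper's exact code), the following sketch projects a single plane object through a virtual pinhole array onto the EIA plane, with the parameter values used later in Section 3 as defaults; pickup_eia and obj_plane are hypothetical names, occlusion is ignored, and the object plane is assumed to be sampled on the same pixel grid as the EIA:

```python
import numpy as np

def pickup_eia(obj_plane, z, g=3.0, n_lens=40, ei_px=40):
    """Synthesize the EIA of one plane object at distance z in front of
    a virtual pinhole array (gap g between the array and the EIA plane)."""
    N = n_lens * ei_px                      # 1600x1600 pixels, as in Sec. 3
    ys, xs = np.mgrid[0:N, 0:N]             # EIA pixel coordinates
    py = (ys // ei_px) * ei_px + ei_px / 2  # centre of the pinhole that
    px = (xs // ei_px) * ei_px + ei_px / 2  # "owns" each EIA pixel
    M = z / g                               # magnification factor
    # Similar triangles: the ray hitting EIA pixel (ys, xs) through its
    # pinhole came from object-plane point (oy, ox), inverted about the
    # pinhole centre and scaled by M.
    oy = np.rint(py + (py - ys) * M).astype(int)
    ox = np.rint(px + (px - xs) * M).astype(int)
    eia = np.zeros((N, N))
    ok = (oy >= 0) & (oy < N) & (ox >= 0) & (ox < N)
    eia[ok] = obj_plane[oy[ok], ox[ok]]     # sample the object plane
    return eia
```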

There are also two kinds of reconstruction methods: optical integral imaging reconstruction [18, 19, 20, 21] and computational integral imaging reconstruction (CIIR) [22, 23, 24, 25, 26]. In the optical integral imaging reconstruction technique, 3-D object images can be optically reconstructed from the picked-up EIA, whereas in the CIIR technique they can be computationally reconstructed.

Here, the CIIR technique allows us to reconstruct a 3-D object image as a set of discrete plane object images (POIs) along the output plane [23, 24, 25, 26], and this unique feature of the CIIR algorithm has led to the proposal of many kinds of CIIR-based 3-D object recognition methods. In 2003, Kishk and Javidi proposed a performance-enhanced 3-D object recognition method using time-multiplexed computational integral imaging [13]. In 2006, Javidi et al. proposed a CIIR-based recognition method for partially occluded 3-D objects by using a spatial filter [15]. In 2008, Shin and Yoo proposed an effective CIIR-based 3-D object recognition method with scale-variant magnification [27], and Hwang et al. proposed a CIIR-based depth-extraction method using an image separation technique [28]. In 2009, Li et al. proposed a robust CIIR-based 3-D object recognition system using depth data of the picked-up elemental images [29].

In these systems, a correlation operation between the target POIs reconstructed by the CIIR algorithm and a reference object image has mostly been employed for recognition of the 3-D target object. However, with the CIIR algorithm, only the POI reconstructed on the output plane where the object was originally located is clearly focused, whereas the POIs reconstructed away from this focused plane are not. In other words, on the target plane, a clearly focused target POI and many defocused POIs resulting from the objects located on the other planes are simultaneously reconstructed. These blurred POIs appear as noise in the focused target POI during correlation, which results in a significant performance reduction of conventional correlation-based 3-D object recognition systems [28, 30].

Therefore, in most conventional CIIR-based 3-D object recognition systems a test object has been modeled as a 3-D object composed of a limited number of 2-D plane objects located on different depth planes, in which correlations were performed between the prepared 2-D reference object image and a set of 2-D target POIs reconstructed along the output plane by using the CIIR method [22, 23, 24, 25, 26].

Meanwhile, in the case of 3-D volumetric objects, the CIIR algorithm cannot uniquely reconstruct focused target POIs from their picked-up EIA. Here, a 3-D volumetric object can be modeled as a number of infinitesimally sliced 2-D plane objects, so the POIs reconstructed at any target plane are inevitably mixed with many defocused contributions from neighboring image planes, which results in the reconstruction of heavily blurred target POIs. Therefore, with these blurred target POIs we cannot obtain a high correlation value for recognition of 3-D volumetric objects.

Accordingly, in this paper, we propose a novel CIIR-based 3-D image correlator system for an effective recognition of 3-D volumetric objects by employing a set of reference POIs reconstructed by the CIIR technique, which might be regarded here as a 3-D reference object in the process of correlation with the target POIs reconstructed along the output plane. That is, in the proposed method, a group of depth-dependent reference POIs is used for 3-D target recognition, contrary to the conventional method, in which only one 2-D reference object image is used for making a correlation with a set of target POIs.

In other words, 3-D image correlations between two sets of target and reference POIs reconstructed by the CIIR method are carried out, so 3-D volumetric objects can be effectively recognized with this proposed CIIR-based 3-D image correlator. In this study a number of experiments with 3-D volumetric objects are performed to confirm the feasibility of the proposed method, and the results are discussed.

2. Conventional Computational Integral-Imaging Reconstruction Technique

The CIIR method is a computational reconstruction technique that maps picked-up elemental images inversely through a virtual pinhole array based on the computer simulation model of ray optics [22, 23, 24, 25, 26].

The operational principle of the conventional pinhole array-based CIIR algorithm is schematically described in Fig. 1. In this method, each elemental image of the picked-up EIA is inversely mapped onto the reconstructed image plane through each corresponding pinhole with a magnification factor of M=z/g, in which z and g are the distance from the reconstructed image plane to the virtual pinhole array and the distance from the EIA to the virtual pinhole array, respectively.

When M>1, the elemental images inversely mapped through all of the pinholes overlap one another on the reconstructed image plane, and finally a POI is reconstructed on the output plane at distance z. By repeating this process while varying the output distance z, a set of POIs can be reconstructed along the output plane.
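
For illustration, this inverse mapping can be sketched as follows, with the parameters of the experiments in Section 3 as defaults (40 by 40 pinholes, 40 by 40 pixels per elemental image, g=3mm); ciir_poi is a hypothetical name, and averaging the overlapping contributions is one plausible normalization, not necessarily the paper's exact rule:

```python
import numpy as np

def ciir_poi(eia, z, g=3.0, n_lens=40, ei_px=40):
    """Reconstruct the POI at distance z by inversely mapping every EIA
    pixel through its pinhole and averaging the overlapping rays."""
    N = n_lens * ei_px
    ys, xs = np.mgrid[0:N, 0:N]             # EIA pixel coordinates
    py = (ys // ei_px) * ei_px + ei_px / 2  # pinhole centres
    px = (xs // ei_px) * ei_px + ei_px / 2
    M = z / g                               # magnification factor M = z/g
    oy = np.rint(py + (py - ys) * M).astype(int)  # landing point of each
    ox = np.rint(px + (px - xs) * M).astype(int)  # ray on the plane at z
    ok = (oy >= 0) & (oy < N) & (ox >= 0) & (ox < N)
    acc = np.zeros((N, N))                  # summed intensities
    cnt = np.zeros((N, N))                  # overlap counter
    np.add.at(acc, (oy[ok], ox[ok]), eia[ok])
    np.add.at(cnt, (oy[ok], ox[ok]), 1.0)
    return acc / np.maximum(cnt, 1.0)       # average where M > 1 overlaps
```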

Basically, this CIIR method can reconstruct the object image in the form of depth-dependent POIs along the output plane, as shown in Fig. 2. However, in this scheme, not only the clearly focused target POI but also defocused POIs are concurrently reconstructed on each output plane, so the image quality of the reconstructed target images is severely degraded, as shown in Fig. 2, which finally results in a performance reduction of the conventional CIIR-based object recognition system.

Therefore, most conventional CIIR-based 3-D object recognition systems have considered their test object as a 3-D object composed of a limited number of 2-D plane objects located at different depth planes, and correlations for object recognition were performed between a set of 2-D target POIs reconstructed along the output plane with the CIIR algorithm and the prepared 2-D reference object image.

On the other hand, a 3-D volumetric object can be modeled as an infinite number of sliced 2-D plane objects, so the POIs reconstructed at any target plane are inevitably mixed with many defocused contributions from the neighboring image planes; this blurring effect grows as the number of objects increases and as the distances among them decrease. As a result, the correlation performance between these reconstructed target POIs and the reference object image may deteriorate.

Basically, for a 3-D volumetric object, the number of 2-D sliced objects is so large and the distances among them are so small that all of the reconstructed target POIs are significantly affected by this blurring noise; in other words, the CIIR algorithm cannot uniquely reconstruct the target POIs of a 3-D volumetric object. Accordingly, an effective approach for recognition of 3-D volumetric objects with the CIIR technique needs to be developed.

3. Proposed Computational Integral-Imaging Reconstruction-Based 3-D Image Correlator System

Figure 3 shows a block diagram of the proposed CIIR-based 3-D target recognition system. It largely consists of three steps: pickup of the EIA of 3-D target objects and its computational reconstruction along the output plane, pickup of the EIA of the 3-D reference object and its computational reconstruction along the output plane, and simultaneous correlations between the reconstructed target and reference POIs at each depth plane.

That is, in the first step, the 3-D volumetric target objects, which are randomly located in space, are picked up in the form of an EIA with the lenslet array of the integral imaging system. From this picked-up EIA, a set of target POIs is reconstructed on the output planes corresponding to multiples of the distance g. In the second step, a set of EIAs of the reference 3-D volumetric object is captured on the pickup planes corresponding to multiples of the distance g, and these picked-up EIAs are also reconstructed on the output planes corresponding to multiples of the distance g.

Finally, simultaneous correlations between the reconstructed target and reference POIs at each output plane corresponding to a multiple of g are performed for the recognition of the 3-D volumetric target object. In this method, a group of reference POIs reconstructed along the output plane is simultaneously used in the process of correlation with a set of target POIs, contrary to the conventional method, in which only one reference object image is used.

3A. Pickup and Reconstruction of 3-D Volumetric Target Objects

In this paper, as shown in Fig. 4, three 3-D car objects named Car 1, Car 2, and Car 3 are used as test 3-D objects. They are mutually nonoverlapping along the longitudinal direction and are located at z_p^tar1=15mm, z_p^tar2=18mm, and z_p^tar3=21mm, respectively, in front of the pinhole array. The length of each test car is 15mm.

Then, the EIA of these 3-D target objects is computationally picked up and the result is shown in Fig. 5. That is, the EIA of the target objects with 1600 by 1600 pixels is digitally synthesized, the virtual pinhole array is assumed to have 40 by 40 pinholes, and the size of each elemental image is assumed to be 40 by 40 pixels.

The distance between the virtual pinhole array and the elemental image plane is assumed to be 3mm. In addition, the image resolutions of Car 1, Car 2, and Car 3 are 162×74, 146×89, and 144×105 pixels, and their center locations in the 1600 by 1600 image plane are set to (320, 245), (98, 102), and (294, 280), respectively.

From this picked-up EIA, POIs of the 3-D target objects can be reconstructed along the output plane by using the CIIR algorithm. Since the reconstructed image plane can be placed at different depths by increasing the distance from the pinhole array in small incremental steps of Δz, a discrete set of POIs of the 3-D target objects with different depth data can be generated.

Here, the output plane distance is increased from z_r^tar=3mm to 30mm in incremental steps of Δz=3mm, so that 10 POIs of the 3-D target objects are obtained, as shown in Fig. 6.
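
Reusing the hypothetical ciir_poi() sketch from Section 2, this depth sweep amounts to a one-line loop (eia_target is an assumed name for the picked-up EIA of Fig. 5):

```python
# Depth sweep of Fig. 6: 10 target POIs at z = 3, 6, ..., 30 mm.
z_planes = range(3, 31, 3)                       # steps of dz = 3 mm
target_pois = [ciir_poi(eia_target, z) for z in z_planes]
```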

As we can see in Fig. 6, there are no clearly focused POIs for the 3-D target objects, because each target is not a 2-D plane object but a 3-D volumetric object. For example, Fig. 6e shows the target POI reconstructed at a distance of 15mm, where the front side of Car 1 was originally located; only the front image of Car 1 is therefore reconstructed somewhat in focus, while the object images reconstructed from all the other planes are heavily blurred. This result is well explained by the property of the CIIR algorithm mentioned above: only the POI reconstructed on the output plane where the object was originally located is clearly focused, whereas the POIs reconstructed away from this plane are not.

Figure 6f shows the target POI reconstructed at a distance of 18mm, where the front side of Car 2 and part of Car 1 were originally located. As shown in Fig. 6f, the front image of Car 2 and an image between the front and middle parts of Car 1 are reconstructed somewhat in focus, while the object images reconstructed from the other planes are again heavily blurred. In addition, Fig. 6g shows the target POI reconstructed at a distance of 21mm, where the front part of Car 3 and some parts of Car 2 and Car 1 were located. As shown in Fig. 6g, the middle image of Car 1, an image between the front and middle parts of Car 2, and the front image of Car 3 are reconstructed in focus, while the other object parts reconstructed from other planes are blurred.

Furthermore, in Figs. 6a, 6b, 6c, 6d, we cannot reconstruct focused POIs for any target objects, because they are all reconstructed on output planes where the objects were not originally located. On the other hand, in Figs. 6h, 6i, 6j, some image parts of Car 1, Car 2, and Car 3 are still reconstructed in focused form, because each object has a length of 15mm, which means that they were all captured within the pickup range.

Thus, it is clear from Fig. 6 that only a small part of the 3-D object image is focused while most of it is blurred in every target POI reconstructed by using the CIIR technique. That is, for 3-D volumetric objects, the CIIR algorithm cannot uniquely reconstruct focused target POIs, because the CIIR technique only allows us to reconstruct a 3-D object image as a set of discrete POIs along the output plane. Moreover, the POIs reconstructed on any target plane are inevitably mixed with many defocused contributions from the neighboring image planes, which causes the reconstruction of heavily blurred target POIs and a deterioration of correlation-based target recognition performance.

3B. Pickup of the 3-D Volumetric Reference Object and Its Reconstruction

In the experiment, the 3-D volumetric reference object is assumed to be the car object Car 2, and a set of EIAs of this reference object is picked up along the pickup plane in the proposed method. That is, the 3-D reference object Car 2 is located at z_p^ref=3, 6, 9, …, 30mm in front of the pinhole array, and its corresponding EIA is then sequentially picked up at each location of z_p^ref.

The incremental step between the locations of the 3-D reference object could be set arbitrarily, but in this paper it is assumed to be 3mm, which is equal to the distance g between the pinhole array and the pickup plane. Figure 7 shows the top and side views of the reference 3-D volumetric object Car 2 located in front of the pinhole array; it has a length of 15mm.

In this paper, a set of EIAs of the reference 3-D object Car 2 is also computationally picked up by sequentially locating the car object at each plane of z_p^ref=3, 6, 9, …, 30mm. That is, a set of EIAs of the reference object with 1600 by 1600 pixels is digitally synthesized, in which the virtual pinhole array is assumed to have 40 by 40 pinholes and each elemental image 40 by 40 pixels. The distance between the virtual pinhole array and the elemental image plane is also assumed to be 3mm, just as in the pickup of the target objects.

The resolution of the reference object image of Car 2 is 146×89 pixels, and its center location in the 1600 by 1600 image plane is set to (0, 0). Figure 8 shows the 10 EIAs of the reference object Car 2 picked up at the depth planes of z_p^ref=3, 6, 9, …, 30mm.

From these 10 picked-up EIAs, we can reconstruct POIs of the reference 3-D object along the output plane by using the CIIR algorithm. Since the reconstructed image plane can be placed at different depths by increasing the distance from the pinhole array in small incremental steps of Δz, a discrete set of POIs with different depth data can be generated.

In this paper, the output distance is increased from z_r^ref=3mm to 30mm in incremental steps of Δz=3mm, just as in the reconstruction of the target POIs, so that 10 POIs are obtained for each picked-up EIA of the reference 3-D object; these will be used later for correlations with the reconstructed target POIs.

For example, Fig. 9 shows the reference POIs reconstructed along the output plane from the EIAs picked up with the 3-D reference object Car 2 located at z_p^ref=9mm and 18mm, respectively.

As we can see in Fig. 9, there are no clearly focused POIs for the 3-D reference object, because the reference object is not a 2-D plane object but a 3-D volumetric object. For example, Fig. 9(a-3) shows the reference POI reconstructed at a distance of 9mm from the EIA picked up with the reference 3-D object Car 2 located at 9mm in the pickup process; only the front image of Car 2 is reconstructed somewhat in focus, while the object images reconstructed from all the other planes are heavily blurred, as expected. Figure 9(a-5) shows the reference POI reconstructed at a distance of 15mm, in which the middle image of Car 2 is reconstructed in focus while the other parts of the reference object get blurred. In addition, Fig. 9(a-7) shows the reference POI reconstructed at a distance of 21mm, in which the back image of Car 2 is reconstructed in focus while the other parts of the reference object get blurred.

Furthermore, Fig. 9b shows the reference POIs reconstructed from the EIA picked up with the 3-D volumetric reference object Car 2 located at 18mm in the pickup process. As shown in Fig. 9b, all the results are very similar to those of Fig. 9a.

Figure 10 shows the POIs computationally reconstructed from the same part of the reference 3-D volumetric object Car 2. As shown in Fig. 10, the front parts of Car 2 are all clearly focused, but the back parts of Car 2 are all severely blurred. Accordingly, it is noted here that the clearly focused image part of each reconstructed reference POI differs depending on the original position of the 3-D reference object in the pickup process, even though all these reference POIs represent the same part of the 3-D reference object Car 2.

Similarly, the blurred image parts of the reconstructed reference POIs also differ from each other depending on the original position of the 3-D reference object in the pickup process, although these reconstructed POIs represent the same part of the reference 3-D object.

In other words, since both the focused and the blurred parts of the reconstructed POIs differ depending on the original location of the reference 3-D object in the pickup process, this unique property of the computationally reconstructed reference POIs can be used for the detection and recognition of 3-D target objects in space.

3C. Correlation-Based Recognition of 3-D Volumetric Target Objects

In the conventional CIIR-based correlation method, the target POIs reconstructed with the CIIR algorithm and a 2-D reference object image are employed in the correlation process for recognition of the target object. The output plane on which the highest correlation value occurs is then regarded as the location of the target object.

However, in the case of a 3-D volumetric target object, we cannot computationally reconstruct the target POIs clearly, because with the CIIR algorithm only the POI reconstructed on the output plane where the object was originally located is clearly focused, whereas the POIs reconstructed away from this focused plane get blurred.

Therefore, in this paper, a 3-D volumetric object is employed as the reference object. That is, a group of EIAs of the 3-D reference object is picked up at depth locations (z_p^ref) corresponding to multiples of the distance g, and then, from each picked-up EIA, a set of reference POIs is reconstructed along the output plane by using the CIIR algorithm. This group of reconstructed reference POIs is used for the correlations with the target POIs.

Initially, from the EIA picked up from the 3-D volumetric target objects, a set of POIs with different depth data is generated along the output plane by using the CIIR algorithm, as explained in Subsection 3A. In this paper, the output plane distance increases from z_r^tar=3mm to 30mm in incremental steps of Δz=3mm, so that 10 target POIs are obtained.

Second, a set of EIAs of the 3-D volumetric reference object is captured along the pickup plane, as explained in Subsection 3B. Then, from these picked-up EIAs, a set of reference POIs with different depth information is reconstructed along the output plane. In this paper, 10 EIAs are sequentially picked up at the locations of z_p^ref=3, 6, 9, …, 30mm, and then 10 reference POIs for each picked-up EIA are reconstructed along the output plane by increasing the output distance from z_r^ref=3mm to 30mm in incremental steps of Δz=3mm, so that 100 (10×10) POIs in total are obtained from the 10 EIAs.
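
In code form, this nested reconstruction could be sketched as follows, again reusing the hypothetical ciir_poi() from Section 2 (ref_eias is an assumed list holding the 10 picked-up EIAs of Fig. 8):

```python
# 10 reference EIAs (Car 2 at z_p^ref = 3, 6, ..., 30 mm), each giving
# 10 reference POIs along the output plane: 100 POIs in total.
ref_pois = [[ciir_poi(eia, z) for z in range(3, 31, 3)] for eia in ref_eias]
```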

To quantitatively evaluate the similarity between the POIs of the target and reference images, the normalized cross-correlation (NCC) is employed as a similarity parameter, as shown in Eq. (1), where T and S are the POIs of the target and reference object images, respectively, and E(·) represents the statistical expectation operation:

NCC(S,T) = \frac{\left| \sum [T - E(T)]\,[S - E(S)] \right|}{\sqrt{\sum [T - E(T)]^{2} \, \sum [S - E(S)]^{2}}}.  (1)

Now, the detailed recognition process is as follows. At each output distance, we obtain a similarity parameter for each of the 10 pairs of reconstructed target and reference POIs by cross-correlating them. Therefore, by using Eq. (1), a variation curve of the NCC can be obtained along the output plane.
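
A direct transcription of Eq. (1) and of the resulting NCC curve might look as follows (ncc is a hypothetical name; the sample mean stands in for the expectation E(·), and target_pois / ref_pois are the lists sketched above):

```python
import numpy as np

def ncc(s, t):
    """Eq. (1): NCC between a reference POI s and a target POI t,
    with the sample mean standing in for the expectation E(.)."""
    s0, t0 = s - s.mean(), t - t.mean()
    return np.abs((t0 * s0).sum()) / np.sqrt((t0**2).sum() * (s0**2).sum())

# One NCC curve of Fig. 11: reference group k against the 10 target POIs.
k = 5                              # e.g., the reference EIA picked up at 18 mm
curve = [ncc(r, t) for r, t in zip(ref_pois[k], target_pois)]
```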

Figure 11 shows the NCC values calculated at the 10 output planes of z=3, 6, 9, …, 30mm between the computationally reconstructed target and reference POIs. Here, 10 groups of reference POIs, each composed of 10 reference POIs reconstructed along the 10 output planes, are sequentially used in the process of cross-correlation with the 10 target POIs.

As shown in Fig. 11, for cross-correlations between the target POIs and the reference POIs reconstructed from EIAs captured at pickup planes of less than 15mm, the overall NCC values are found to be relatively small. Here, the target object Car 2 was originally located at 18mm, while these reference EIAs were captured at pickup planes of less than 15mm, so the reconstructed reference POIs of Car 2 differ from the Car 2 image embedded in the reconstructed target POIs at all output planes, which finally results in a reduction of the cross-correlation values between them.

However, as the pickup distance of the reference object increases, the overall NCC values increase as well. Finally, at the pickup plane of 18mm, the overall cross-correlation values are maximized, because the target object Car 2 was located at 18mm and the reference object Car 2 was captured at the pickup plane of 18mm. In other words, the target and reference POIs reconstructed along the output plane for the object Car 2 become theoretically identical to each other.

Accordingly, for the case of the pickup plane of 18mm, maximum correlation peaks can be obtained at all output planes. In practice, however, the correlation values are somewhat reduced at the output planes of less than 15mm owing to the overlapping of target images in the reconstructed POIs.

Beyond the pickup plane of 18mm, the overall NCC values decrease again, because the focused and blurred image parts of the reconstructed POIs differ depending on the original position of the reference object in the pickup process, as shown in Fig. 9.

At this point, if we compare the hexagon-marked line with the star-marked line in Fig. 11, the star-marked line has a smaller value in all regions except at the point of 12mm. That is, the correlation result for the original position has the largest value, but some errors can still occur. Therefore, if we use the correlation value averaged over all output planes, a more accurate correlation value can be extracted, and the position can be detected more reliably. In other words, the target object can be recognized simply by comparing the correlation values obtained at each output plane, but more accurately by comparing the averaged correlation values, i.e., the averages of the correlation values over all output planes.
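
A sketch of this averaging rule, reusing the hypothetical ncc(), ref_pois, and target_pois from the earlier snippets:

```python
import numpy as np

# Average each NCC curve over the 10 output planes; the pickup plane
# with the largest average is taken as the detected depth (cf. Fig. 12).
avg_ncc = [np.mean([ncc(r, t) for r, t in zip(group, target_pois)])
           for group in ref_pois]
z_detected = 3 * (int(np.argmax(avg_ncc)) + 1)   # in mm; 18 mm for Car 2
```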

Figure 12 shows the averaged cross-correlation values when each of the objects Car 1, Car 2, and Car 3 is used as the reference object and the target objects Car 1, Car 2, and Car 3 are located at distances of 15, 18, and 21mm, respectively. Figure 12a shows the averaged correlation values obtained by using the conventional correlation method, whereas the averaged correlation values obtained by using the proposed correlation method are shown in Fig. 12b. As we can see in Fig. 12a, the target objects Car 1, Car 2, and Car 3 are found to be located at distances of 21, 27, and 27mm, respectively. These results are wrong, owing to the fact that each target object is not a 2-D plane object but a 3-D volumetric object, as noted above. In the proposed method, on the other hand, the exact locations of the 3-D target objects are accurately obtained, as shown in Fig. 12b.

In addition, once the longitudinal positions of the objects have been detected, the lateral coordinates of the objects can also be obtained through correlations between the reference 3-D object images and the corresponding POIs reconstructed on the planes where the objects were originally located. Figure 13 shows the lateral profiles of the correlation results obtained with the conventional and proposed methods. From the experimental results of Figs. 12, 13, the 3-D location coordinates of the target objects Car 1, Car 2, and Car 3 were finally found to be (320, 245, 21), (100, 102, 27), and (293, 278, 27) for the conventional method and (320, 245, 15), (98, 102, 18), and (294, 280, 21) for the proposed method. Comparing these results with the original 3-D location coordinates of the target objects, the proposed method yields exactly the same coordinates, whereas those of the conventional method deviate from the originals.
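
As a sketch of this lateral search (not the paper's exact filter design), one could locate the correlation peak of the zero-mean reference image against the POI reconstructed at the detected depth; lateral_peak, poi, and ref_img are hypothetical names, and SciPy's fftconvolve is used for the 2-D cross-correlation:

```python
import numpy as np
from scipy.signal import fftconvolve

def lateral_peak(poi, ref_img):
    """Locate the target laterally: FFT-based cross-correlation of the
    zero-mean reference image with the POI at the detected depth."""
    p, r = poi - poi.mean(), ref_img - ref_img.mean()
    corr = fftconvolve(p, r[::-1, ::-1], mode="same")   # cross-correlation
    y, x = np.unravel_index(np.argmax(corr), corr.shape)
    return x, y                                          # peak coordinates
```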

That is, in the proposed system, 3-D image correlations between two sets of target and reference POIs reconstructed by the CIIR method were carried out, so that 3-D volumetric objects were found to be effectively recognized with the proposed CIIR-based 3-D image correlator. These experimental results confirm the feasibility of the proposed CIIR-based 3-D object recognition system for practical applications.

In this paper we mainly focused on a feasibility test of the proposed method; various experiments with more general 3-D volumetric target and reference objects need to be further studied for practical application in real fields.

4. Conclusion

In this paper, a novel CIIR-based 3-D volumetric target recognition system has been proposed, in which a 3-D reference object image is newly employed in the correlation process with the 3-D target object images. That is, a set of reference POIs computationally reconstructed along the output plane was used for 3-D volumetric target recognition. In other words, a new CIIR-based 3-D volumetric image correlator system has been proposed, and its feasibility has been confirmed through a number of experiments.

This research was supported by the Ministry of Knowledge Economy (MKE), Korea, under the Information Technology Research Center (ITRC) support program supervised by the National IT Industry Promotion Agency (NIPA) (NIPA-2009-C1090-0902-0018).

Fig. 1 Operational principle of the conventional CIIR algorithm.

Fig. 2 POIs reconstructed along the output plane by using the CIIR algorithm.

Fig. 3 Block diagram of the proposed CIIR-based 3-D target recognition system.

Fig. 4 Experimental setup for pickup of the EIA of three 3-D target objects: (a) top view, (b) side view.

Fig. 5 Picked-up EIA for the 3-D target objects.

Fig. 6 Target POIs reconstructed from the picked-up EIA along the output plane: (a) z_r^tar=3mm, (b) z_r^tar=6mm, (c) z_r^tar=9mm, (d) z_r^tar=12mm, (e) z_r^tar=15mm, (f) z_r^tar=18mm, (g) z_r^tar=21mm, (h) z_r^tar=24mm, (i) z_r^tar=27mm, (j) z_r^tar=30mm.

Fig. 7 Experimental setup for pickup of the EIA of the reference 3-D object Car 2: (a) top view, (b) side view.

Fig. 8 Picked-up EIAs of the 3-D volumetric reference object Car 2: (a) z_p^ref=3mm, (b) z_p^ref=6mm, (c) z_p^ref=9mm, (d) z_p^ref=12mm, (e) z_p^ref=15mm, (f) z_p^ref=18mm, (g) z_p^ref=21mm, (h) z_p^ref=24mm, (i) z_p^ref=27mm, (j) z_p^ref=30mm.

Fig. 9 Examples of reference POIs reconstructed from the picked-up EIAs: (a) reference POIs reconstructed from the EIA picked up at z_p^ref=9mm, (b) reference POIs reconstructed from the EIA picked up at z_p^ref=18mm.

Fig. 10 Reconstructed POIs at the same position of the reference 3-D object Car 2: (a) z_p^ref=3mm, z_r^ref=6mm, (b) z_p^ref=6mm, z_r^ref=9mm, (c) z_p^ref=9mm, z_r^ref=12mm, (d) z_p^ref=12mm, z_r^ref=15mm, (e) z_p^ref=15mm, z_r^ref=18mm, (f) z_p^ref=18mm, z_r^ref=21mm, (g) z_p^ref=21mm, z_r^ref=24mm, (h) z_p^ref=24mm, z_r^ref=27mm, (i) z_p^ref=27mm, z_r^ref=30mm.

Fig. 11 Calculated NCC values between the computationally reconstructed target and reference POIs for Car 2.

Fig. 12 Averaged correlation values for three cases of the reference object: (a) conventional method, (b) proposed method.

Fig. 13 Correlation outputs for Car 1, Car 2, and Car 3 obtained with the conventional and proposed methods: (a) conventional method: peak (0.7995) in (320, 245, 21) for Car 1 (View 1); (b) conventional method: peak (0.8097) in (100, 102, 27) for Car 2 (View 2); (c) conventional method: peak (0.7832) in (293, 278, 27) for Car 3 (View 3); (d) proposed method: peak (0.9878) in (320, 245, 15) for Car 1 (View 4); (e) proposed method: peak (0.9867) in (98, 102, 18) for Car 2 (View 5); (f) proposed method: peak (0.9960) in (294, 280, 21) for Car 3 (View 6).

1. K. Iizuka, "Welcome to the wonderful world of 3D: Introduction, principles and history," Opt. Photon. News 17(7), 42–51 (2006).

2. S.-C. Kim and E.-S. Kim, "Performance analysis of stereoscopic three-dimensional projection display systems," 3D Res. 1, 010101 (2009).

3. S.-C. Kim, P. Sukhbat, and E.-S. Kim, "Generation of three-dimensional integral images from a holographic pattern of 3-D objects," Appl. Opt. 47, 3901–3908 (2008).

4. S.-C. Kim and E.-S. Kim, "Effective generation of digital holograms of 3-D objects using a novel look-up table method," Appl. Opt. 47, D55–D62 (2008).

5. T.-C. Poon and T. Kim, "Optical image recognition of three-dimensional objects," Appl. Opt. 38, 370–381 (1999).

6. B. Javidi and E. Tajahuerce, "Three-dimensional object recognition by use of digital holography," Opt. Lett. 25, 610–612 (2000).

7. Y. Frauel, E. Tajahuerce, M. A. Castro, and B. Javidi, "Distortion-tolerant three-dimensional object recognition with digital holography," Appl. Opt. 40, 3887–3893 (2001).

8. A. Pu, R. Denkewalter, and D. Psaltis, "Real-time vehicle navigation using a holographic memory," Opt. Eng. 36, 2737–2746 (1997).

9. J. Rosen, "Three-dimensional electro-optical correlation," J. Opt. Soc. Am. A 15, 430–436 (1998).

10. J. Rosen, "Three-dimensional joint transform correlator," Appl. Opt. 37, 7538–7544 (1998).

11. O. Matoba, E. Tajahuerce, and B. Javidi, "Real-time three-dimensional object recognition with multiple perspectives imaging," Appl. Opt. 40, 3318–3325 (2001).

12. Y. Frauel and B. Javidi, "Digital three-dimensional image correlation by use of computer-reconstructed integral imaging," Appl. Opt. 41, 5488–5496 (2002).

13. S. Kishk and B. Javidi, "Improved resolution 3D object sensing and recognition using time multiplexed computational integral imaging," Opt. Express 11, 3528–3541 (2003).

14. J. Park, J. Kim, and B. Lee, "Three-dimensional optical correlator using a sub-image array," Opt. Express 13, 5116–5126 (2005).

15. B. Javidi, R. Ponce-Diaz, and S.-H. Hong, "Three-dimensional recognition of occluded objects by using computational integral imaging," Opt. Lett. 31, 1106–1108 (2006).

16. A. Stern and B. Javidi, "Three-dimensional image sensing, visualization, and processing using integral imaging," Proc. IEEE 94, 591–607 (2006).

17. Y. Kim, K. Hong, and B. Lee, "Recent researches based on integral imaging display method," 3D Res. 1, 010102 (2009).

18. F. Okano, H. Hoshino, J. Arai, and I. Yuyama, "Three-dimensional video system based on integral photography," Opt. Eng. 38, 1072–1077 (1999).

19. B. Lee, S. Y. Jung, S.-W. Min, and J.-H. Park, "Three-dimensional display by use of integral photography with dynamically variable image planes," Opt. Lett. 26, 1481–1482 (2001).

20. J.-S. Jang and B. Javidi, "Improved viewing resolution of three-dimensional integral imaging by use of nonstationary micro-optics," Opt. Lett. 27, 324–326 (2002).

21. D.-H. Shin, B.-H. Lee, and E.-S. Kim, "Multidirectional curved integral imaging with large depth by additional use of a large-aperture lens," Appl. Opt. 45, 7375–7381 (2006).

22. H. Arimoto and B. Javidi, "Integral three-dimensional imaging with digital reconstruction," Opt. Lett. 26, 157–159 (2001).

23. S.-H. Hong, J.-S. Jang, and B. Javidi, "Three-dimensional volumetric object reconstruction using computational integral imaging," Opt. Express 12, 483–491 (2004).

24. S.-H. Hong and B. Javidi, "Improved resolution 3D object reconstruction using computational integral imaging with time multiplexing," Opt. Express 12, 4579–4588 (2004).

25. D.-H. Shin, E.-S. Kim, and B. Lee, "Computational reconstruction technique of three-dimensional object in integral imaging using a lenslet array," Jpn. J. Appl. Phys. 44, 8016–8018 (2005).

26. H. Yoo and D.-H. Shin, "Improved analysis on the signal property of computational integral imaging system," Opt. Express 15, 14107–14114 (2007).

27. D.-H. Shin and H. Yoo, "Scale-variant magnification for computational integral imaging and its application to 3D object correlator," Opt. Express 16, 8855–8867 (2008).

28. D.-C. Hwang, K.-J. Lee, S.-C. Kim, and E.-S. Kim, "Extraction of location coordinates of 3-D objects from computationally reconstructed integral images basing on a blur metric," Opt. Express 16, 3623–3635 (2008).

29. G. Li, S.-C. Kim, and E.-S. Kim, "Performance-enhanced 3-D object recognition by use of computational integral imaging with depth data of the picked-up elemental images," Jpn. J. Appl. Phys. 48, 092401 (2009).

30. K.-J. Lee, D.-C. Hwang, S.-C. Kim, and E.-S. Kim, "Blur-metric-based resolution enhancement of computationally reconstructed integral images," Appl. Opt. 47, 2859–2869 (2008).
