Multi-focused microlens array optimization and light field imaging study based on Monte Carlo method

Open Access

Abstract

Plenoptic cameras are used for capturing flames in studies of high-temperature phenomena. However, simulations of plenoptic camera models can be used prior to the experiment to improve experimental efficiency and reduce cost. In this work, the microlens array of an established light field camera model is optimized into a hexagonal structure with three types of microlenses. With this improved plenoptic camera model, light field imaging of static objects and of a flame is simulated using the calibrated parameters of a Raytrix camera (R29). The optimized model improves the image resolution, the utilization of the imaging screen, and the depth-of-field shooting range.

© 2017 Optical Society of America

1. Introduction

High-temperature combustion occurs in a large number of fields such as aerospace and energy production. It is also embodied in various devices such as engines, power station boilers, and coal gasification reactors. Flame is a combustion phenomenon, and theoretical and experimental research on flames is necessary for designing combustion systems [1–3].

A flame's light field information can be efficiently recorded by taking photographs. However, when the velocity of the exhaust plume is extremely high, the whole combustion field is instantaneously filled with flame, and complicated high-temperature combustion occurs simultaneously. Thus, synchronous acquisition of the light field information at large projection angles is indispensable. Light field imaging technology can be used at large projection angles to detect this type of spray flame. Nevertheless, the experimental conditions and cost must be considered before light field cameras are applied in experiments. For example, since the flame temperature can be tremendously high and the state is unsteady, direct application of the light field camera for capturing the flame data is inadvisable: the lens could be ablated because of the short distance between the fire and the camera, and frequently changing experimental conditions could increase the cost of the experiment. On the other hand, numerical simulation can separately consider the environmental temperature, illumination conditions, medium type, and other influencing factors. Using numerical simulation, light field imaging can be obtained under ideal experimental conditions, and the above-mentioned problems are avoided.

In recent years, researchers have observed flames using light field cameras, making significant progress. Chen et al. [4] utilized the light field imaging technique to study complex three-dimensional engine in-cylinder processes; direct visualization via optical diagnostics enabled insight into in-cylinder mixing processes, such as liquid fuel spray atomization. Bolan et al. [5] presented a three-dimensional (3D) deconvolution method that enhanced the spatial resolution of refocused two-dimensional (2D) images in environments possessing a significant out-of-plane signal from a flame or other translucent source. However, most researchers have concentrated on solving resolution and precision problems or on optimizing the parameters of the light field camera by simulating the light path inside the camera. Su et al. [6] calibrated the arrangement of the microlens array (MLA) by acquiring data from a standard calibration image. Lu et al. [7] proposed an iterative algorithm combining a high-resolution image from a standard camera with a low-resolution wavefront measurement, improving both the lateral and axial resolution. Shroff et al. [8] analyzed paraxial imaging and reconstructed high-resolution images. Liang et al. [9] established a light transport framework for analyzing the factors limiting the resolution of the light field camera. Zhang et al. [10] simulated a plenoptic camera model using the ray tracing method, which emulates how objects at different depths are imaged by a plenoptic camera. Schwiegerling [11] proposed a ray tracing process for reconstructing 3D objects using plenoptic images that can be used to verify reconstruction algorithms.

However, previous studies have not focused on numerical simulation of the complete physical process between the detected objects and the imaging lens. The duration, conditions, and cost of an experiment can be predicted by introducing integrated light path emulation, allowing a more targeted experimental design. Moreover, light path simulation can be used to analyze whether the experimental conditions are reliable prior to performing the experiment, avoiding mistakes caused by overlooking a particular condition. After the experiment, the results can be compared with the simulation to verify correctness. Therefore, establishing a complete physical model based on the transmission of light rays in light field imaging is indispensable for simulation calculations.

Our research group has already developed physical simulations based on the light ray propagation process of light field imaging. Liu et al. [12] constructed a physical model of a light field camera based on the Monte Carlo (MC) method to simulate the imaging and refocusing processes of the light field camera; the method treats light propagation under the ray energy logicalization hypothesis [13]. Yuan et al. [14] then analyzed light field camera imaging of participating media, along with refocused and sub-aperture images. The simulated cameras used in these works differ in accuracy and resolution from an actual camera because of structural differences in the MLA of the simulated light field camera model.

In this study, the structural design of the simulated camera is optimized by hexagonally arranging the MLA in the MC code. Moreover, using the concept of a multi-focused plenoptic camera that incorporates an MLA with different focal lengths and identical apertures [15], the optimized MLA in our study has an extended depth of field (DOF) and a maximal effective resolution. After inputting the parameters of the Raytrix camera, simulated images were obtained and verified against images acquired directly by a Raytrix camera. Finally, an ellipsoidal layered static flame is simulated according to the configured parameters of the Raytrix camera.

2. Structure and parameters of the plenoptic camera

Ng et al. [16] proposed the concept of light field imaging based on the surface radiation mechanism. Unlike a traditional camera, the plenoptic camera (plenoptic 1.0) uses a large number of microlenses between the main lens and the photosensor. The photosensor receives the light intensity from a certain direction for each microlens and preserves the multi-angle light information [17]; therefore, 3D models can be reconstructed utilizing light field imaging [18]. The multi-focused plenoptic camera (plenoptic 2.0) was proposed later [19, 20]; its microlenses are focused on the image created by the main lens, not on the main lens itself. This imaging model places the MLA in front of or behind the focal plane of a traditional camera and positions the image sensor behind the MLA. This arrangement allows light beams from different directions and specific depths on the object plane to be focused ahead of time, with secondary focusing on the image sensor. Georgiev et al. [21] designed a focused plenoptic camera based on multi-focused microlenses, the characteristic of which lies in using an interlaced MLA with different focal lengths to focus incident light rays from two or more object planes, thus extending the DOF.

The multi-focused plenoptic camera is introduced into the previously designed light field model [12]. Note that the virtual plane of the main lens in plenoptic 2.0 is an idealized analytical construct proposed for analyzing the light path. This study, however, uses a physical model in which only actual planes exist; virtual planes are not included in the simulation. Accordingly, the MLA must be placed at a position corresponding to that in an actual camera in order to obtain the same physical results.

Similar to the previously constructed light field camera models, this study utilizes the Monte Carlo method to trace the light path. Figure 1 shows the camera structure model (not drawn to scale). To obtain the image acquired by the simulated light field model, a light ray transmission model must be constructed from the objects of interest to the camera, comprising the main lens, the MLA, and the photosensor (CCD). The yellow and orange lines show light rays passing through the main lens and MLA and finally arriving at the CCD sensor.

Fig. 1 Camera structure model (not drawn to scale).

2.1. MLA structure

To improve the utilization of the CCD sensor, we use a hexagonal arrangement of microlenses (Fig. 2), similar to that of a commercial light field camera. The microlenses are positioned at the vertices and center of a regular hexagon. As illustrated in Fig. 2, the distance between the centers of adjacent microlenses 1 and 2 is L μm. Here, θ is the angle between the vertical line and the line joining the centers of microlenses 1 and 3, and the center distance between microlenses 1 and 3 along the y-axis is Ly = L cos θ. The number of pixels of the CCD is set to H × V with pixel size p μm. The number of microlenses in the z-direction is m, and the number in the y-direction is n. Compared with an orthogonal MLA, the hexagonal arrangement can accommodate more microlenses on the same H × V CCD sensor. In the z-direction, the numbers of orthogonal and hexagonal microlenses are the same: mh = mo = V × p/L. However, in the y-direction, the hexagonal arrangement can accommodate nh = V × p/Ly microlenses, while the orthogonal arrangement can accommodate only no = V × p/L. Therefore, under the same conditions, the ratio of the total microlens counts of the hexagonal and orthogonal arrays is η = (mh × nh)/(mo × no) = 1/cos θ. From the geometry of the regular hexagon, θ = 30°, thus η = 1/cos 30° = 1.1547, i.e., the hexagonal arrangement accommodates 1.1547 times as many microlenses as the orthogonal arrangement. Moreover, the hexagonal MLA can capture more of the sensor area than the orthogonal array [Fig. 2(a)]. In Fig. 2(a), blue circles represent the hexagonal array, while yellow circles indicate the orthogonal array. The CCD sensor area behind each microlens is a square with side length equal to the microlens diameter, so the corner regions of each square lying outside the circular microlens are not used. If the microlenses are organized hexagonally, part of this area can be exploited, as shown by the black shaded regions in Fig. 2(a). Therefore, by accommodating more microlenses, the number of light rays passing through the MLA increases, improving the utilization of the CCD sensor and recording more effective light field information.

Fig. 2 Microlens configuration diagram.
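
To make the packing comparison concrete, the following minimal sketch (our own illustration, not code from the paper) counts the microlenses that fit on a sensor for the two layouts, using the parameter values of section 2.2 as example numbers:

```python
import math

V_pix, p, L = 3000, 0.01, 0.1            # pixels, pixel size (mm), microlens pitch (mm)
theta = math.radians(30)                 # row-offset angle of the hexagonal lattice

m_both = int(V_pix * p / L)              # z-direction count, identical for both layouts
n_orth = int(V_pix * p / L)              # y-direction count, orthogonal grid (pitch L)
n_hex = int(V_pix * p / (L * math.cos(theta)))   # y-direction count, row pitch Ly = L*cos(theta)

eta = (m_both * n_hex) / (m_both * n_orth)       # -> ~1/cos(30 deg) = 1.1547
print(m_both, n_orth, n_hex, round(eta, 4))      # 300 300 346 1.1533 (integer truncation)
```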

2.2. Realization of multi-focused plenoptic camera

If the focal length of a microlens is given, the DOF has a corresponding value; the relation between them is discussed below. Accordingly, if the number of microlens types is increased, the set of focal lengths and the joint DOF of the array are extended. This method resolves the short DOF of a single focal length while extending the camera's shooting range. Figure 3 demonstrates an arrangement of three types of microlenses in which each microlens is equidistant from its adjacent microlenses of the other types, i.e., around microlens 1 there are three neighboring microlenses of type 2 interspersed with three of type 3, all equally distant from the center of microlens 1.

Fig. 3 Pattern diagram for three types of lens arrays.

The spherical lens focal length can be calculated as follows:

1/f = (n − 1)[1/R1 − 1/R2 + (n − 1)l/(n R1 R2)]  (1)

The focal length of the microlenses can be modified by changing three parameters: the refractive index n, the radii of curvature R1 and R2, or the microlens thickness l. The relationship between the focal length and these parameters in formula (1) is depicted in Fig. 4. The refractive index can be modified by changing the material inside the microlenses, for example by using a liquid crystal material [22]. Altering the radius of curvature has drawbacks: increasing it leads to interference among neighboring microlenses, while decreasing it reduces transmission. Nevertheless, the radius of curvature is the easiest of the three parameters to alter in fabrication, so it is the usual choice for setting the focal length, e.g., via the thermal reflow method [23]. Because the microlens is thin, changing its thickness causes manufacturing difficulty and increases error; moreover, the reciprocal of the focal length depends only linearly on the thickness, whereas it contains a term quadratic in (n − 1). Thus, we choose to modify the refractive index to extend the variation range of the focal length when simulating the multi-focused plenoptic camera.

Fig. 4 The influence of different parameters on the microlens focal length.
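
As a complement to Fig. 4, the short sketch below (our own, with the sign convention R2 < 0 assumed for the rear surface of a biconvex microlens) evaluates Eq. (1) for the parameter values used later in this section:

```python
def inv_f(n, R1, R2, l):
    """Eq. (1), thick-lens (lensmaker's) form; returns 1/f in 1/mm."""
    return (n - 1) * (1 / R1 - 1 / R2 + (n - 1) * l / (n * R1 * R2))

# Nominal microlens values from section 2.2; R2 < 0 is our sign assumption.
print("f =", round(1 / inv_f(1.50, 0.225, -0.225, 0.05), 4), "mm")   # ~0.2337

# Sweeping the refractive index reproduces the trend shown in Fig. 4:
for n in (1.50, 1.59, 1.69):
    print(n, "->", round(1 / inv_f(n, 0.225, -0.225, 0.05), 4), "mm")
```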

Utilizing the method of arranging the focal length relationships among three types of microlenses [15], we separately obtain the focal lengths of the three types of microlenses using the parameters in Table 1.

According to the research in [15], the relationship between the main lens and the MLA is given as

B/D = (BL − B)/DL, i.e., N = NL − B/DL  (2)

The left and right boundaries of the microlens DOF are given, respectively, as

a0− = [1/f − (1/B)(1 − p/D)]^(−1)  (3)
a0+ = [1/f − (1/B)(1 + p/D)]^(−1)  (4)

Thus, the DOF is

a0 = |a0+ − a0−|  (5)

Under these conditions, the DOFs of the three types of microlenses, with focal lengths f1, f2, and f3, are made to adjoin.

The left boundary of the first microlens type is equal to the right boundary of the second type:

a0+(f2) = a0−(f1)  (6)

The left boundary of the second microlens type is equal to the right boundary of the third type:

a0+(f3) = a0−(f2)  (7)

The surfaces on both sides of the main lens are spherical with radius of curvature R = 0.3 m, the lens thickness along the optical axis is lL = 24.26 mm, the main lens diameter is DL = 120 mm, and the main lens focal length is fL = 304.0987 mm. The corresponding minimum aperture value is NL = 2.710.

The spacing between the centers of the microlenses is L = 0.1 mm, the microlens diameter is D = 0.1 mm, the spherical radius of both the front and rear surfaces of the microlenses is R1 = R2 = 0.225 mm, the microlens thickness is l = 0.05 mm, and the refractive index of the microlens is n1 = 1.50. Using these parameters in Eq. (1), the focal length of the microlens is calculated to be f1 = 0.233 mm.

From the focal length, the minimum aperture value of the microlens is N = 2.534. Therefore, in the case where the MLA lies outside the focal length of the main lens, the F-numbers of the microlens and the main lens satisfy Eq. (2). In this study, a single lens, rather than a better-imaging lens group, is used as the main lens. Therefore, to reduce aberration, the aperture size is reduced, to 100 mm in this example, and the approximate aperture value is NL = 3.256.

The CCD sensor pixel size is p = 0.01 mm, and the numbers of transverse (H) and longitudinal (V) pixels are H = V = 3000. Thus, each microlens corresponds to 10 × 10 pixels, and B = 0.2534 mm. The focal lengths of the second and third types of microlenses are f2 = 0.197 mm and f3 = 0.170 mm, respectively. Using Eq. (1), the refractive indices of the second and third types of microlenses are calculated as n2 = 1.59 and n3 = 1.69, respectively.

The processed data are shown in Table 2. Analyzing these data, the ratio of the DOF a0 of plenoptic 2.0 to that of plenoptic 1.0 is 1.955. Thus, the joint DOF of the three interlaced microlens types is almost twice that of a single microlens type.

Table 2. Comparison between plenoptic 2.0 and plenoptic 1.0
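The whole chain from Eq. (1) through Eqs. (6) and (7) can be checked numerically. The following sketch (our own verification, not the authors' code; R2 < 0 for a biconvex lens is our assumption) reproduces the values quoted above, f1 ≈ 0.233 mm, f2 ≈ 0.197 mm, f3 ≈ 0.170 mm, n2 ≈ 1.59, and n3 ≈ 1.69:

```python
from scipy.optimize import brentq

R1, R2, l = 0.225, -0.225, 0.05     # microlens surfaces and thickness (mm); R2 < 0 assumed
B, p, D = 0.2534, 0.01, 0.1         # MLA-sensor distance, pixel size, lens diameter (mm)

def f_of_n(n):
    """Eq. (1): focal length as a function of refractive index."""
    return 1 / ((n - 1) * (1 / R1 - 1 / R2 + (n - 1) * l / (n * R1 * R2)))

def a_minus(f):
    """Eq. (3): left DOF boundary of a microlens with focal length f."""
    return 1 / (1 / f - (1 / B) * (1 - p / D))

plus_term = (1 / B) * (1 + p / D)   # the bracketed factor of Eq. (4)

f1 = f_of_n(1.50)                              # ~0.233 mm
f2 = 1 / (1 / a_minus(f1) + plus_term)         # Eq. (6): a0+(f2) = a0-(f1) -> ~0.197 mm
f3 = 1 / (1 / a_minus(f2) + plus_term)         # Eq. (7): a0+(f3) = a0-(f2) -> ~0.170 mm

# Invert Eq. (1) numerically for the other two refractive indices.
n2 = brentq(lambda n: f_of_n(n) - f2, 1.4, 2.0)    # ~1.59
n3 = brentq(lambda n: f_of_n(n) - f3, 1.4, 2.0)    # ~1.69
print(round(f1, 3), round(f2, 3), round(f3, 3), round(n2, 2), round(n3, 2))
```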

2.3. Subaperture imaging

With an actual camera, the acquired images are post-processed by recombining the pixels on the CCD sensor in order to derive subaperture images of the main lens. However, the MLA is hexagonal, while the pixels of the output image format can only be square. Therefore, the subaperture image cannot be obtained by simply recombining the image on the CCD sensor. We reallocate the light energy so that the subaperture images better resemble the object itself, ultimately increasing the image resolution. Figure 5 illustrates the weighted averaging of the gray level using bilinear interpolation [24], from which a subaperture with evenly distributed energy can be obtained.

Fig. 5 The principle of subaperture processing.

In bilinear interpolation, for a target pixel, the floating-point coordinate determined via the inverse transformation is (i + u, j + v), where i and j are the integer parts of the floating-point coordinates and u and v are the fractional parts, with u, v ∈ [0, 1). Thus, the pixel value f(i + u, j + v) can be derived from the four pixels surrounding the target pixel, whose coordinates are (i, j), (i + 1, j), (i, j + 1), and (i + 1, j + 1):

f(i + u, j + v) = (1 − u)(1 − v)·f(i, j) + (1 − u)·v·f(i, j + 1) + u·(1 − v)·f(i + 1, j) + u·v·f(i + 1, j + 1)  (8)
Using the bilinear interpolation given above, the floating-point coordinates obtained via the inverse transformation are (i, j), where i and j are the serial numbers of the microlens within the subaperture. Therefore, the gray value of pixel f(i, j) is determined by the gray values of the four neighboring pixels and the pixel itself.

Applying the bilinear interpolation to the gray values, we find

f(i, j) = (2cos θ − 1)·f(i, j) + ((1 − cos θ)/2)·[f(i, j − cos θ) + f(i, j + cos θ) + f(i − 1, j − cos θ) + f(i − 1, j + cos θ)]  (9)

When θ = 30°, the equation above simplifies to

f(i, j) = (1/4)·[f(i, j − 1/2) + f(i, j + 1/2) + f(i − 1, j − 1/2) + f(i − 1, j + 1/2)]  (10)
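
A compact sketch of the resampling (our own; taking the half-step samples f(i, j ± 1/2) as the stored values at columns j and j + 1 of the offset hexagonal rows is our discretization assumption):

```python
import numpy as np

def bilinear(img, x, y):
    """Eq. (8): bilinear interpolation of img at floating point (x, y), x, y >= 0."""
    i, j = int(x), int(y)          # integer parts
    u, v = x - i, y - j            # fractional parts in [0, 1)
    return ((1 - u) * (1 - v) * img[i, j] + (1 - u) * v * img[i, j + 1]
            + u * (1 - v) * img[i + 1, j] + u * v * img[i + 1, j + 1])

def hex_to_square(g):
    """Eq. (10), theta = 30 deg: each output gray value is the mean of the four
    half-step neighbours in two adjacent hexagonal rows; f(i, j - 1/2) and
    f(i, j + 1/2) are approximated by the stored samples at columns j and j + 1."""
    g = np.asarray(g, dtype=float)
    return 0.25 * (g[1:, :-1] + g[1:, 1:] + g[:-1, :-1] + g[:-1, 1:])
```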

3. Simulated image results

All calculations were performed using the MC method on a 2.30 GHz Intel(R) Xeon(R) E5-2650 v3 processor. To improve the calculation efficiency, 20-thread parallel computing was utilized, with each thread tracing 1 × 10^9 light rays and each ray carrying 10 nW of energy, for 2 × 10^10 rays in total. The total computing time was 10 hours. If the energy received by a pixel exceeds 5000 nW, that pixel is considered saturated. Parallel computing improves the efficiency and precision of the simulation compared with a single-thread calculation, whose computing time would be prohibitively long. In this process, the light rays are generated by random-number subroutines. When a thread completes, the program automatically skips 1 × 10^9 random numbers for the next thread's calculation to ensure the independence of the random sequences.
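
The thread layout can be sketched as follows (our own illustration using NumPy's counter-based Philox generator, not the authors' code; the ray trace itself is replaced by a placeholder draw, and worker processes stand in for threads):

```python
import numpy as np
from multiprocessing import Pool

RAYS_PER_THREAD = 10**6   # the paper uses 1e9 per thread; reduced so the sketch runs quickly

def trace_block(thread_id):
    bitgen = np.random.Philox(key=2017)
    bitgen.advance(RAYS_PER_THREAD * thread_id)   # skip the draws of earlier workers
    rng = np.random.Generator(bitgen)
    # Placeholder for the actual ray trace: one uniform draw stands in for one ray.
    return rng.random(RAYS_PER_THREAD).mean()     # stands in for accumulated pixel energy

if __name__ == "__main__":
    with Pool(20) as pool:                        # 20 workers, as in the paper
        print(pool.map(trace_block, range(20)))
```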

To verify the feasibility of the simulated plenoptic camera model, we first simulate light field camera imaging of surface-reflected light. In the coordinate system shown in Fig. 1, a sphere, a cylinder, and a cone were placed about 5 m from the main lens. A level surface and a point light source radiating from the upper left corner were added to strengthen the stereoscopic impression. The source coordinates were (x0, y0, z0) = (0, 5, 5), the centerline of the emitted beam pointed toward the point (x, y, z) = (5, 0, 0) on the x-axis, and the illumination beam spread was approximately 22.37°. The surfaces of the objects are diffusely reflective with a reflectivity of 1.0, and the level surface is also diffusely reflective.

The image in Fig. 6 is obtained without an MLA. Figure 7 shows the imaging result of the previous work, in which the number of microlenses is 60 × 60 on a 100 mm CCD. Figure 8 shows the optimized image from this work. A detailed analysis of these images follows.

Fig. 6 Diffusive reflective surfaces.

Fig. 7 Light spot on CCD by microlens (previous work [12]).

Fig. 8 Light spot on CCD by microlens (this work).

As illustrated in Fig. 6, parts of the shadows of the cylinder and cone can be observed. Although the rays irradiate the three objects evenly, the luminance of the cone is more conspicuous than that of the other two objects. We believe this occurs because the light originates at the upper left corner, and the incident zenith angle on part of the cone surface is small; considering the cosine effect, a larger amount of inclined light per unit area increases the brightness.

The images obtained with the MLAs in Figs. 7 and 8 were acquired with the same parameters and positions. Each microlens produces a corresponding light spot (facula) on the CCD sensor. The facula can be regarded as an image captured by the microlens through the main lens, i.e., the shapes of the faculae record considerable information: the brightness differs when the main lens is observed from different microlens positions. As shown in the enlargements of Fig. 8, the imaging of the three kinds of microlenses differs. The imaging of the microlens whose focal range contains the object is distinct, while the spots of the other two types on the CCD sensor are mottled and blurred. For comparison, Fig. 9 shows the differences among the three types of microlenses as captured by a multi-focus plenoptic camera; the object is in focus for only one of the three microlens types.

Fig. 9 Three types of microlenses captured by a Raytrix camera [19].

From the results above, the previous and optimized results are evidently different. The optimized image is more similar to the image acquired without the MLA, and its resolution is higher than that of the previous work. Thus, the subaperture image can be acquired using the relevant image rendering algorithm.

Figures 10(a) and 10(b) are the subaperture images of plenoptic 1.0 and plenoptic 2.0, respectively. The images are similar in that the luminance of objects captured from different subaperture positions shows a distinct transition. For the cylinder, for example, the brightest part shifts across the subaperture images: in the left-side subapertures, the brightest part of the cylinder is almost at the left, while in the right-side subapertures it is in the middle. Moreover, the apparent height of the level surface gradually increases from the top subaperture images to the lower ones, which results from the lower visual angle of those subapertures in the main lens. On the other hand, at the edge of the subaperture image, Fig. 10(b) is brighter than Fig. 10(a) because the three kinds of microlenses can record light information over a larger angular range. In addition, the hexagonal arrangement of the microlenses increases the light utilization. However, for a given depth, the number of in-focus microlenses in plenoptic 2.0 is one third of that in plenoptic 1.0; the remaining microlenses are out of their focal range, so the subaperture captured by plenoptic 2.0 is comparatively dull.

Fig. 10 Subaperture images from different camera types.

4. Raytrix camera simulation

4.1. Simulated Raytrix camera image

To verify the correctness of the simulation, we adjust the model according to the device parameters of a commercial light field camera. The calibrated parameters of the commercial camera used in this study (Raytrix R29, RGB [25]) are given in Table 3. These parameters are used to simulate an actual R29 camera and capture the image. In accordance with the definition in section 2.1, the CCD pixel numbers are: transverse number H = 6576 and longitudinal number V = 4384. Since the gray-scale map depicts only the brightness of the energy distribution, i.e., the RGB color information can be ignored, the total pixel number is reduced to 1/3 of the original value; each dimension is therefore reduced by a factor of √3, to approximately H′ = 3796.77 and V′ = 2531.18. Moreover, L = 165 μm and p = 5.5 μm, so the number of subapertures recording directional information is (L/p)^2 = 900. After adjustment to integer multiples of L/p, the transverse and longitudinal numbers used in the model calculation are H = 3780 and V = 2550, respectively. Accordingly, the calibrated microlens numbers m0 = 207 and n0 = 160 are modified to m = 126 and n = 85, where m × n represents the pixel number occupied by each subaperture.

Table 3. Camera module specification [25].
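The bookkeeping above can be reproduced in a few lines (our own sketch of the arithmetic, not vendor code):

```python
import math

H0, V0 = 6576, 4384          # R29 sensor pixels (Table 3)
L, p = 165.0, 5.5            # microlens pitch and pixel size (micrometers)
k = round(L / p)             # pixels per pitch: 30, hence 30 x 30 = 900 subapertures

# Dropping the RGB color keeps 1/3 of the pixels, shrinking each dimension by sqrt(3):
H1, V1 = H0 / math.sqrt(3), V0 / math.sqrt(3)
print(round(H1, 2), round(V1, 2))    # 3796.77 2531.18

# The model then adopts integer multiples of the 30-pixel pitch:
m, n = 126, 85               # microlens columns/rows used in the calculation
print(m * k, n * k)          # 3780 2550
```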

Figure 11(a) demonstrates the simulated result using the R29 camera parameters. To shorten the experimental distance, the three objects are moved closer to the main lens: the ball, cylinder, and cone are placed 1.8 m, 1.5 m, and 1.3 m from the main lens, respectively. In contrast with the actual R29 picture [Fig. 11(c)], the simulated result is more blurred than the actual image because of the differences in pixel numbers between the two cameras. Moreover, the image recorded by the CCD sensor in Fig. 11(a) shows that the illuminated side of each object is brighter than the shaded side, so the light source direction can be judged directly; the actual camera image does not show this phenomenon. This difference may occur because the simulated model sets a light source position and the rays act as non-parallel light, whereas the light environment for the image taken with the actual camera is parallel sunlight.

Fig. 11 Simulation of the R29 camera and images acquired by the actual camera.

Figure 12 shows the subapertures of the simulated plenoptic camera model. Each subaperture occupies 126(H) × 85(V) pixels after the extraction, and the total number of subapertures is 30 × 30. Compared with the set of subapertures in the previous work (10 × 10), the set of 30 × 30 subaperture images records more light information. In contrast with the subaperture images in Fig. 10, the subaperture images in Fig. 12 at different positions show little change, with inconspicuous view variance and similar brightness. This behavior may occur because the simulated camera incorporates fewer microlenses while the number of pixels occupied by each microlens increases; thus the perspective of the main lens is reduced, differences between microlenses shrink, and the subaperture images remain almost the same. The enlargement in Fig. 12 emphasizes the distortion in the subaperture images. As mentioned in section 2.1, the center distance between microlenses 1 and 3 along the y-axis is Ly = L cos θ (Fig. 2), where L = 165 μm, θ = 30°, and p = 5.5 μm. In theory, the distance between the centers of two adjacent rows of microlenses is Ly/p = 15√3 ≈ 25.98 pixels, but pixel positions on the CCD sensor can only take integer values. Therefore, a calibration offset of the microlens centers occurs, which manifests as salient image distortion in the subaperture images.

Fig. 12 Subaperture images remapped by the simulated camera model.

The refocused image can be obtained by shifting and superposing the subaperture images [26]. Figure 13 demonstrates the refocused images, in which part (a) is refocused at 1.5 m (the cylinder position) and part (b) at 1.3 m (the cone position). When the refocus distance is 1.5 m, the edge of the cylinder is clear while the edge of the cone is blurry; when the refocus distance is 1.3 m, the cylinder becomes blurry, but the cone is distinct.

Fig. 13 Refocus images.
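
A minimal shift-and-add refocusing sketch (our own, following the general idea of [26]; the per-unit shift alpha and the use of integer pixel shifts are simplifying assumptions):

```python
import numpy as np

def refocus(subaps, alpha):
    """Shift-and-add refocusing. subaps: array [s, t, y, x] of subaperture
    images (e.g. 30 x 30 of them); alpha: pixels of shift per unit subaperture
    offset, a hypothetical parameter whose value selects the refocusing depth."""
    S, T, Y, X = subaps.shape
    out = np.zeros((Y, X))
    for s in range(S):
        for t in range(T):
            dy = round(alpha * (s - S // 2))   # integer shifts keep the sketch short;
            dx = round(alpha * (t - T // 2))   # subpixel shifts would use interpolation
            out += np.roll(subaps[s, t], (dy, dx), axis=(0, 1))
    return out / (S * T)
```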

4.2. Flame imaging

Using the above results, we then focus on simulating an actual flame model. We consider a simplified low-velocity flame, e.g., a candle flame in natural convection, which can be regarded as ellipsoidal in shape.

The simulated shape and dimensions of the ellipsoidal flame are shown in Fig. 14. The centers of the three ellipsoidal layers are offset from one another. Since the overall size of the flame is small, it is placed 1 m from the main lens to ensure sufficiently high image resolution. Table 4 shows the temperature and radiative parameters of the ellipsoidal flame layers. Figure 15 shows the direct image and light field image captured in a previous work (microlens number: 60 × 60, orthogonal [12]). Figure 16 shows the simulated CCD sensor results from this work for a static flame at distances of 1.0 m and 0.3 m.

Fig. 14 Schematic of an ellipsoidal stratified flame.

Table 4. Temperature and radiative parameters of ellipsoidal flame layers [14].

Fig. 15 Images of an ellipsoidal layered flame [12].

Fig. 16 Images of an ellipsoidal layered flame using the model presented in this work.

Comparing Figs. 15 and 16, the stratified flame can be clearly distinguished. Moreover, because of the increased microlens number and the augmented visual angle, the flame can be placed closer to the main lens (0.3 m); as a result, the CCD sensor can obtain a saturated image. Furthermore, as demonstrated in Fig. 15(b), the facula occupied by each microlens on the CCD sensor is small, i.e., the pixel utilization of the CCD sensor is quite poor. The faculae of the microlenses in Fig. 16 are larger, allowing more light information to be recorded and thus more accurate imaging.

5. Conclusions

To detect light field parameters in complicated conditions, a viable light field camera imaging model was developed by analyzing the light field camera imaging mechanism. The proposed model can simulate static objects, flames, and even the light path inside the camera. The following conclusions can be drawn.

  • 1. The MLA is modified into a hexagonal arrangement, which yields a 15.47% improvement in CCD sensor utilization. The MLA is divided into three depths of field according to three different focal lengths. The bilinear interpolation method is used to produce a uniform energy distribution in the subaperture images.
  • 2. By adjusting the microlens F-number, the microlens number, and the microlens area distribution, the resolution is improved to 9 million pixels. After extracting the 3000 × 3000 pixels on the CCD sensor, a 10 × 10 set of subapertures is obtained, each 30 × 30 pixels in size. This change makes the difference between the CCD image and the subaperture images less distinct.
  • 3. This study adapts the parameters of the numerically simulated plenoptic camera model to those of an actual camera, extracting the 3780(H) × 2550(V) CCD sensor pixels into subapertures of 126(H) × 85(V) pixels, of which there are 30 × 30. The simulated images are similar to the results of an actual experiment, verifying the correctness of our model. Furthermore, stratified imaging of a static ellipsoidal flame can be simulated by placing the flame 1.0 m from the main lens. Compared with the previous work [12], the optimized model of this study produces higher-resolution imaging.

The optimization of the MLA and the improved resolution guarantee the correctness of the numerically simulated plenoptic camera model. Thus, the correspondence with an actual camera such as the Raytrix R29 provides a foundation for future studies.

Funding

National Natural Science Foundation of China (NSFC) (51327803, 51406041); China Postdoctoral Science Special Foundation (2015T80347)

Acknowledgments

We would like to specially acknowledge the editors and referees who made important comments that helped us to improve this paper.

References and links

1. A. K. Sehra and W. Whitlow, Jr., "Propulsion and power for 21st century aviation," Prog. Aerosp. Sci. 40(4-5), 199–235 (2004).

2. G. Richards, M. McMillian, R. Gemmen, W. A. Rogers, and S. Cully, "Issues for low-emission, fuel-flexible power systems," Prog. Energy Combust. Sci. 27(2), 141–169 (2001).

3. B. Zhang, C.-L. Xu, and S.-M. Wang, "Generalized source finite volume method for radiative transfer equation in participating media," J. Quant. Spectrosc. Radiat. Transf. 189, 189–197 (2017).

4. H. Chen, P. M. Lillo, and V. Sick, "Three-dimensional spray-flow interaction in a spark-ignition direct-injection engine," Int. J. Engine Res. 17(1), 129–138 (2015).

5. J. J. T. Bolan, K. C. Johnson, and B. S. Thurow, "Enhanced imaging of reacting flows using 3D deconvolution and a plenoptic camera," in AIAA SciTech Forum, 53rd AIAA Aerospace Sciences Meeting (2015), paper 2015-0532.

6. L. Su, Q. Yan, J. Cao, and Y. Yuan, "Calibrating the orientation between a microlens array and a sensor based on projective geometry," Opt. Lasers Eng. 82, 22–27 (2016).

7. C.-H. Lu, S. Muenzel, and J. Fleischer, "High-resolution light-field microscopy," in Imaging and Applied Optics (Optical Society of America, 2013), paper CTh3B.2.

8. S. A. Shroff and K. Berkner, "Image formation analysis and high resolution image reconstruction for plenoptic imaging systems," Appl. Opt. 52(10), D22–D31 (2013).

9. C.-K. Liang and R. Ramamoorthi, "A light transport framework for lenslet light field cameras," ACM Trans. Graph. 34(2), 1–19 (2015).

10. R. Zhang, P. Liu, D. Liu, and G. Su, "Reconstruction of refocusing and all-in-focus images based on forward simulation model of plenoptic camera," Opt. Commun. 357, 1–6 (2015).

11. J. Schwiegerling, "Plenoptic camera image simulation for reconstruction algorithm verification," Proc. SPIE 9193, 178–182 (2014).

12. B. Liu, Y. Yuan, S. Li, Y. Shuai, and H. P. Tan, "Simulation of light-field camera imaging based on ray splitting Monte Carlo method," Opt. Commun. 355, 15–26 (2015).

13. M. Premuda, E. Palazzi, F. Ravegnani, D. Bortoli, S. Masieri, and G. Giovanelli, "MOCRA: a Monte Carlo code for the simulation of radiative transfer in the atmosphere," Opt. Express 20(7), 7973–7993 (2012).

14. Y. Yuan, B. Liu, S. Li, and H.-P. Tan, "Light-field-camera imaging simulation of participatory media using Monte Carlo method," Int. J. Heat Mass Transfer 102, 518–527 (2016).

15. C. Perwass and L. Wietzke, "Single lens 3D-camera with extended depth-of-field," Proc. SPIE 8291, 829108 (2012).

16. R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, "Light field photography with a hand-held plenoptic camera," Computer Science Tech. Rep. 2, 1–11 (2005).

17. T. Georgiev and C. Intwala, "Light field camera design for integral view photography," Adobe Systems, Inc., Technical Report (2006).

18. M. Levoy and P. Hanrahan, "Light field rendering," in Proceedings of ACM SIGGRAPH (1996), pp. 31–42.

19. U. Perwass and C. Perwass, "Digital imaging system, plenoptic optical device and image data processing method," US Patent 8,619,177 (2013).

20. A. Lumsdaine and T. Georgiev, "Full resolution lightfield rendering," Adobe Systems, Inc., Technical Report (2008).

21. T. Georgiev and A. Lumsdaine, "The multifocus plenoptic camera," Proc. SPIE 8299, 829908 (2012).

22. Y. Lei, Q. Tong, X. Zhang, H. Sang, A. Ji, and C. Xie, "An electrically tunable plenoptic camera using a liquid crystal microlens array," Rev. Sci. Instrum. 86(5), 053101 (2015).

23. M.-K. Park, H. J. Lee, J.-S. Park, M. Kim, J. M. Bae, I. Mahmud, and H.-R. Kim, "Design and fabrication of multi-focusing microlens array with different numerical apertures by using thermal reflow method," J. Opt. Soc. Korea 18(1), 71–77 (2014).

24. X. Li and M. T. Orchard, "New edge-directed interpolation," IEEE Trans. Image Process. 10(10), 1521–1527 (2001).

25. J. Sun, C. Xu, B. Zhang, M. M. Hossain, S. Wang, H. Qi, and H. Tan, "Three-dimensional temperature field measurement of flame using a single light field camera," Opt. Express 24(2), 1118–1132 (2016).

26. Z. L. Zhou, "Research on light field imaging technology," dissertation, University of Science and Technology of China, Anhui, China (2012).


