## Abstract

Plenoptic imaging is a 3D imaging technique that has been applied for quantification of 3D particle locations and sizes. This work experimentally evaluates the accuracy and precision of such measurements by investigating a static particle field translated to known displacements. Measured 3D displacement values are determined from sharpness metrics applied to volumetric representations of the particle field created using refocused plenoptic images, corrected using a recently developed calibration technique. Comparison of measured and known displacements for many thousands of particles allows for evaluation of measurement uncertainty. Mean displacement error, as a measure of accuracy, is shown to agree with predicted spatial resolution over the entire measurement domain, indicating robustness of the calibration methods. On the other hand, variation in the error, as a measure of precision, fluctuates as a function of particle depth in the optical direction. Error shows the smallest variation within the predicted depth of field of the plenoptic camera, with a gradual increase outside this range. The quantitative uncertainty values provided here can guide future measurement optimization and will serve as useful metrics for design of improved processing algorithms.

© 2017 Optical Society of America

## 1. Introduction

The increasing prevalence of light field imaging, and specifically plenoptic cameras, as a noninvasive three-dimensional (3D) diagnostic for particle tracking motivates the need for detailed understanding of measurement accuracy. Previous applications of particle tracking using plenoptic imaging have included wind tunnel experiments [1,2], multiphase sprays [3,4], medical imaging [5], natural flow analysis [6], and environmental studies [7]. In these works, quantification of accuracy has been mostly limited to statistical analyses of experimental results [4,8,9] and comparison with alternative measurement techniques [1,10]. Other works have focused on defining and improving imaging performance, such as spatial resolution, point spread functions, and depth of field (DOF) [11–16]. While it seems likely that particle measurement uncertainty is strongly related to these optical performance metrics, this has yet to be experimentally verified.

In this work, a large experimental data set of a static particle field translated to known distances is used to quantify the uncertainty of 3D particle location measurements using a single plenoptic camera. Particular emphasis is placed on determining the accuracy and precision in the out-of-plane direction, as the uncertainty is generally highest in this direction due to the relatively small angular range from which depth is reconstructed.

#### 1.1 Light field imaging

The key benefit of a plenoptic camera is in the capability to create a 3D representation of a scene in post processing from a single, instantaneous raw image. This is achieved using a microlens array placed between the main objective and the image sensor. The microlenses redirect the incoming light rays onto different image sensor locations based on angle of propagation. As such, a plenoptic camera captures not just the spatial distribution of the light rays but also some angular information. This can be computationally manipulated to create images in which the focal plane is shifted from the nominal position and images in which the viewing perspective is altered from the original recorded position [16].
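The shift-and-add refocusing idea described above can be illustrated with a toy example in the spirit of [16]. The sketch below is not the camera's actual processing pipeline: it uses a simplified two-dimensional light field (one angular axis, one spatial axis) and a hypothetical `refocus` function whose `slope` parameter plays the role of the synthetic focal-plane shift.

```python
import numpy as np

def refocus(light_field, slope):
    """Shift-and-add refocusing of a simplified 2D light field.

    light_field: array of shape (n_u, n_s); axis 0 is the angular
    sample (u), axis 1 the spatial sample (s). Each angular view is
    shifted in proportion to its offset from the central view and the
    views are then averaged.
    """
    n_u, n_s = light_field.shape
    u0 = (n_u - 1) / 2.0
    out = np.zeros(n_s)
    for u in range(n_u):
        shift = int(round((u - u0) * slope))
        out += np.roll(light_field[u], shift)
    return out / n_u

# A point source away from the nominal focal plane shows a per-view
# disparity: its image in view u sits at s = 50 + 2*(u - 2).
lf = np.zeros((5, 101))
for u in range(5):
    lf[u, 50 + 2 * (u - 2)] = 1.0

# Refocusing with the matching slope realigns all views onto s = 50,
# producing a sharp peak; slope 0 leaves the point spread across views.
sharp = refocus(lf, -2.0)
blurred = refocus(lf, 0.0)
```

The same principle, applied to a full 4D light field, underlies the numerically refocused images used throughout this work.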

An example application of this technology for single camera, 3D particle measurements is shown in Fig. 1 [4]. Figure 1(a) shows a single raw image of the crown splash created from the impact of a water drop on a thin film of water. The inset details the sub-aperture images created by the microlens array. Using the methods defined in [16], this is numerically refocused along the optical depth direction, *z*, with Fig. 1(b) showing two examples. Finally, as detailed in [4], the instantaneous 3D position and size of each particle is automatically quantified using image processing routines. With two such realizations recorded in short succession, particle matching is used to determine the 3D particle positions and three-component velocities shown in Fig. 1(c).

Inspection of the results in Fig. 1(c) appears to show some erroneous vectors in the out-of-plane, *z*-direction where a few individual measured velocities do not follow the overall flow symmetry. In [4] this symmetry is leveraged to provide an initial estimate of overall measurement uncertainty. However, to eventually reduce this uncertainty via improved experiments or processing methods, it is first necessary to study measurement accuracy and precision in detail. This is done here with a more controlled particle field intended to replicate the basic features of the application shown in Fig. 1.

#### 1.2 Theoretical limitations

Measurement accuracy is hypothesized to be strongly correlated with the spatial resolution and depth of field of the plenoptic camera. Due to the use of the microlens array and the unique post-processing methods, the definition of these metrics is slightly more complex compared to traditional photography.

Figure 2 illustrates a simple plenoptic camera in which five pixels lie behind each microlens. The color bands in the top schematic illustrate the angular regions captured by each individual pixel. As detailed in [16], a numerically refocused image is calculated by integrating all of the light which originates from the plane of interest into spatial regions discretized by the microlenses. For example, refocusing to the nominal focal plane of the main lens sums the intensity of each of the illustrated pixels to determine the refocused intensity at the center of the image. The theoretical depth of field of such a numerically refocused image at the nominal focal plane is derived by Deem et al. [15] as Eq. (1), where *f* is the focal length of the main lens, *M* is the nominal magnification, *f*_{μ} is the focal length of the microlenses, and *N* is the number of image sensor pixels behind each microlens in one dimension (calculated as the microlens pitch, *p*_{μ}, divided by the pixel pitch, *p*_{p}). In this work, particle depths are determined from the sharpness of numerically refocused images. Therefore, it is hypothesized that Eq. (1) will bound the measured particle depth precision, and this quantity is referred to as the theoretical depth resolution, Δ*z*.
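Since particle depth is recovered from the sharpness of refocused images, a minimal depth-from-sharpness sketch is included here for illustration. It scans a synthetic focal stack with a generic gradient-energy (Tenengrad-style) metric and reports the plane of maximum sharpness; this is only a schematic stand-in, as the actual processing in this work uses the hybrid minimum-intensity/edge-sharpness method of [20], and the function and variable names here are illustrative.

```python
import numpy as np

def depth_from_sharpness(focal_stack, z_planes):
    """Return the z-plane at which a generic gradient-energy sharpness
    metric is maximized (illustrative only, not the hybrid method
    actually used in this work)."""
    scores = []
    for img in focal_stack:
        gy, gx = np.gradient(img)
        scores.append(np.mean(gx**2 + gy**2))
    return z_planes[int(np.argmax(scores))]

# Synthetic focal stack: an energy-normalized Gaussian blob that is
# narrowest (sharpest) at z = 2 mm and blurs away from that plane.
z_planes = np.linspace(-5.0, 5.0, 11)
yy, xx = np.mgrid[-32:33, -32:33].astype(float)
stack = []
for z in z_planes:
    sigma = 1.0 + abs(z - 2.0)  # blur grows away from best focus
    stack.append(np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) / sigma**2)
z_hat = depth_from_sharpness(stack, z_planes)
```

The gradient-energy score peaks where the particle image is tightest, which is why the depth of field of the refocused images, Eq. (1), is expected to bound the attainable depth precision.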

In-plane resolution of plenoptic imaging is also examined by Deem et al. [15] using ray-tracing of the light incident on a microlens. Since all light rays incident on a particular microlens reach the same image sensor pixels, any light rays emanating from the corresponding object space location have indistinguishable spatial locations in that dimension [15]. The size of this object space location is determined by the nominal magnification of the system and the microlens pitch, as given by Eq. (2).

In this work, in-plane particle positions are measured from the refocused images, with Eq. (2) defining the theoretical bounds of precision. Finally, it is important to quantify the range of *z*-depths over which measurements are expected to remain accurate. A numerically refocused image can only be rendered as sharp as the sub-images from which it is determined [16]. These sub-images are discretized by the individual pixels, and, as illustrated in the bottom row of Fig. 2, the small effective aperture results in a relatively large effective depth of field. This is calculated by first determining the diameter of the *N* times smaller aperture, *D* = *p*_{p}·*l*_{i}/*f*_{μ}, where *l*_{i} is the image distance of the main lens. Given additionally that the size of the circle of confusion is equivalent to the in-plane resolution calculated in Eq. (2), the depth of field of the sub-images can be determined from a standard depth of field equation as Eq. (3), where *l*_{o} is the object distance of the main lens. From this equation we can also define the near and far limits of the depth of field, *z*_{N} and *z*_{F}, respectively, as Eq. (4). Any particle located within *z*_{N} ≤ *z* ≤ *z*_{F} is expected to come into sharp focus at its corresponding optical depth. Therefore, Eq. (4) defines the theoretical range of optical depths over which particle measurements should remain accurate. The current work focuses on macroscopic applications of plenoptic imaging in which the microlenses and pixels are larger than the relevant diffraction limited spot size. This is contrasted with microscopic applications, such as those discussed by Truscott et al. [8], Levoy [17], and Pepe et al. [14], where spatial resolution and depth of field may be better predicted by diffraction limited models. While the methods provided here may guide future investigations of the uncertainty of diffraction limited applications, the detailed quantitative results are likely to differ.

In summary, it is hypothesized that the precision of individual particle measurements in the out-of-plane, *z*-direction and the in-plane, *x*- and *y*-directions will correlate with Eqs. (1) and (2), respectively. In addition, the total depth range over which measurements will remain accurate is bounded by Eq. (4). In the remaining sections, experiments are presented which attempt to confirm this theory.
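The geometric limits discussed above can be assembled into a quick pre-test calculator. The sketch below is hedged: Eqs. (1)-(4) are not reproduced here, so the sub-image depth-of-field limits are instead computed from standard thin-lens relations with an effective f-number *f*/*D* and a circle of confusion assumed equal to the microlens pitch on the sensor (the sensor-side equivalent of the in-plane resolution of Eq. (2)). The exact expressions in the text may differ in detail, and the function name is illustrative.

```python
# Pre-test estimate of plenoptic imaging limits (illustrative sketch).
# Assumptions beyond the text: thin-lens conjugates l_i = f*(1 + M) and
# l_o = f*(1 + 1/M), and a standard depth-of-field relation with a
# circle of confusion c = p_mu on the sensor.

def plenoptic_limits(f, M, f_mu, p_mu, p_p):
    """All lengths in mm. Returns (N, dx, z_N, z_F), where dx is the
    object-space microlens pitch (in-plane resolution scale) and
    z_N, z_F bound the sub-image depth of field."""
    N = p_mu / p_p                       # pixels per microlens (1D)
    dx = p_mu / M                        # microlens pitch in object space
    l_i = f * (1.0 + M)                  # image distance (thin lens)
    l_o = f * (1.0 + 1.0 / M)            # object distance (thin lens)
    D = p_p * l_i / f_mu                 # N-times smaller effective aperture
    F = f / D                            # effective f-number of sub-images
    c = p_mu                             # assumed circle of confusion
    z_N = l_o * f**2 / (f**2 + F * c * (l_o - f))   # near DOF limit
    z_F = l_o * f**2 / (f**2 - F * c * (l_o - f))   # far DOF limit
    return N, dx, z_N, z_F

# Camera of Section 2: f = 105 mm, f_mu = 0.308 mm, p_mu = 0.077 mm,
# p_p = 0.0055 mm, evaluated at the nominal magnification M = 0.5.
N, dx, z_N, z_F = plenoptic_limits(105.0, 0.5, 0.308, 0.077, 0.0055)
```

Under these assumptions the predicted depth-of-field range shrinks rapidly with increasing magnification, consistent with the need described in Section 2 to traverse the particle field when the predicted range exceeds the 50 mm stage travel.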

## 2. Experimental configuration

In previous work focused on experimental uncertainty quantification of related measurement techniques, well-defined 3D calibration fields have been created using neutrally buoyant particles immersed in liquids [18] or objects fixed on glass slides [19]. While similar techniques could be employed here, the refraction of light through material interfaces, combined with the unique manner in which plenoptic cameras utilize angular information, would result in image aberrations that must be corrected. To avoid this, the current work focuses on a particle field in which all recorded light rays pass through a common index of refraction medium, namely air. This is achieved using the experimental configuration photographed in Fig. 3.

A rigid particle field is simulated by the heads of straight pins (roughly 2.4 mm diameter) inserted at random orientations into a rigid foam ball (approximately 115 mm diameter). To reduce reflections and increase the ease with which the pin heads can be segmented from the background, the shafts of the straight pins are painted white. Finally, the scene is illuminated by three continuous green LEDs with diffusers placed to reduce shadows created by the pins.

The plenoptic camera used in this work was constructed by the Advanced Flow Diagnostics Laboratory (AFDL) at Auburn University using an Imperx Bobcat B6620 29 MP camera, which has a Truesense KAI-29050 CCD image sensor (6600 × 4400 pixels, *p*_{p} = 5.5 μm pixel pitch). The microlens array has 471 × 362 hexagonally arranged microlenses, each with a pitch of *p*_{μ} = 77 μm. This array is placed approximately one microlens focal length, *f*_{μ} = 308 μm, from the image sensor, using a custom mount designed by the AFDL. Finally, for all the results reported here the main lens had a focal length of *f* = 105 mm.

Precisely known displacements along the optical depth direction, *z*, were created by affixing the particle field to an automatic translation stage as shown in Fig. 3. The stage had a total travel distance of 50 mm with an absolute accuracy of ± 4.5 µm. The plenoptic camera was positioned such that the optical axis was parallel to the translation axis.

To explore the scaling of measurement accuracy with Eqs. (1)-(4), data was collected at four different magnifications of 0.25, 0.35, 0.5, and 0.75, achieved by adjusting the focus of the main lens and the physical separation between the camera and the particle field. Table 1 summarizes the experimental configurations along with the theoretical depth of field, depth resolution, and in-plane resolution.

For some conditions in Table 1, the predicted DOF, over which measurements are expected to retain their accuracy, exceeds the 50 mm travel distance of the translation stage. To extend the experimental range, data was collected with the camera at three different distances from the particle field as shown in Fig. 4. In the middle configuration, the particle field was centered at the nominal focal plane, in the near configuration it was centered 50 mm closer to the camera, and in the far configuration it was centered 50 mm farther from the camera. This resulted in a total of 12 configurations of magnification and depth.

In an experiment, the particle field was translated over its full range with images captured every 1 mm. Following this, the pins were randomly repositioned to ensure independent data sets, and the experiment was repeated 50 times. This process was performed for every magnification and every offset, resulting in over 30000 raw images. Additionally, in each configuration, a set of dot card images was collected so that volumetric calibration could be applied.

#### 2.1 Data processing for particle localization

Each individual image was processed to determine the 3D particle locations and in-plane sizes using a method analogous to the processing of Fig. 1 [4]. To reduce the effects of lens aberrations, a 3D image dewarping was first determined using the dot card images and the methods defined in [15]. Next, each raw particle field image was numerically refocused to 500 evenly spaced *z*-planes spanning a range of 80 mm in the depth direction. This “focal stack” was then dewarped into global coordinate space with the aforementioned transformation. Following this, particle locations were measured using a modified version of the hybrid particle detection method developed by Guildenbecher et al. [20] which determines particle location based on a combination of minimum intensity and maximum edge sharpness. This method was originally designed for application to holography and was first applied to plenoptic data in [4]. Finally, nearest neighbor matching was used to identify the corresponding particles from images recorded after translation.
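The final nearest-neighbor matching step can be sketched as follows. This is a generic illustration, not a reproduction of the implementation used here: the function name and the maximum-displacement cutoff are hypothetical, and each particle in the first frame is simply paired with its closest neighbor in the second frame.

```python
import numpy as np
from scipy.spatial import cKDTree

def match_nearest(pos_a, pos_b, max_disp):
    """Pair each particle in pos_a (n, 3) with its nearest neighbor in
    pos_b (m, 3); pairs farther apart than max_disp are discarded.
    Returns a list of (index_a, index_b) tuples."""
    tree = cKDTree(pos_b)
    dist, idx = tree.query(pos_a)
    return [(i, j) for i, (d, j) in enumerate(zip(dist, idx)) if d <= max_disp]

# Synthetic check: the second frame is the first translated 1 mm in z,
# mimicking the known traverse displacement used in this experiment.
rng = np.random.default_rng(0)
frame1 = rng.uniform(0.0, 100.0, size=(100, 3))
frame2 = frame1 + np.array([0.0, 0.0, 1.0])
pairs = match_nearest(frame1, frame2, max_disp=5.0)
```

For densely seeded fields a plain nearest-neighbor search can produce occasional mismatches, which is one motivation for the outlier handling described in Section 3.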

Due to the large volume of data collected, a computer cluster was used to process the results within a reasonable timeframe. Processing of a single image using one core of this cluster requires approximately 5 hours. In a typical run, 256 cores were used simultaneously to enable processing of approximately 1200 images in one day. This time may be reduced significantly in the future with optimized particle location methods or implementation in a more efficient programming language, however that is beyond the scope of the current work.

## 3. Results and discussion

The following results and analysis are based on error in the displacements measured from the computational manipulation of the plenoptic images. To provide context for these results, example images from two of the configurations are included here. Figure 5 displays three different focal planes from an image with a magnification of 0.5. Figure 6 displays the same three planes, relative to the focal plane, from an image with a magnification of 0.25. As a reminder, each of these sets of three images is created from a single instantaneous raw image; therefore, the apparent translation through space is a computational effect, not physical movement of the experimental apparatus.

Visual examination of these images shows that the three dimensionality of the scene is clearly captured by this diagnostic as indicated by various particles appearing in-focus at different optical depths. Additionally, comparison of the two figures demonstrates the scope of this experiment as the field of view is significantly larger when magnification is decreased.

From these focal stacks, particle locations were measured as shown in Fig. 7, which displays the center image at *z* = 0 mm from Fig. 6. This is overlaid with bright squares showing the in-focus image of each particle at the determined depths. It should be noted that the blurred image of each particle appears in different in-plane positions in the *z* = 0 mm image due to the change in magnification as a function of depth. Particle diameter is indicated by color.

Figure 8 displays an isometric and in-plane view of the particle locations and measured displacements extracted from two images of the data set shown in Fig. 6. The known displacement between the two images was 20 mm. Measured displacement is indicated by vector length and color. From visual examination of these two views, some variation in the measured *z*-displacement is evident as expected.

It should also be noted that the measured particle diameters, shown by the grey scale, fall within a relatively narrow range of 2.35-2.5 mm. This range is reasonable based on the average diameter of 2.4 mm with a standard deviation of 0.06 mm determined using calipers. Further, detailed quantification of size uncertainty is beyond the scope of the current work.

The experimental error in particle depth measurements is determined in relation to the specified positions of the translation stage. Because the location of the nominal focal plane with respect to the *z* = 0 traverse position is not known with sufficient accuracy, this offset is determined with a data fitting procedure. Figure 9 demonstrates the analysis for the particle circled in the middle image in Fig. 5. First, the *z*-position of this particle is measured in each of the 51 images and a preliminary linear fit between these measurements and the corresponding traverse locations is determined. Next, a preliminary depth error is determined as the distance of each measurement from the linear fit. Any measurements producing a depth error of more than 5 mm are defined as outliers and removed to avoid their effect on the fit intercept. (Of all measurements, only approximately 0.03% are defined as outliers.) An updated linear fit is determined from the remaining measurements and the slope of this fit is forced to one. Measurement error is thus defined as the difference between measured particle positions and the linear fit at each traverse position shown in Fig. 9. Outliers are included in this final calculation.
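The fitting procedure just described can be summarized in a short sketch. The 5 mm outlier threshold and the slope-forced-to-one refit follow the text; the helper name and synthetic track are illustrative.

```python
import numpy as np

def depth_errors(z_stage, z_meas, outlier_tol=5.0):
    """Depth error for one particle track (all values in mm).

    1) preliminary linear fit of measured z vs. traverse position,
    2) points farther than outlier_tol from the fit are flagged,
    3) refit with slope forced to one (intercept = mean inlier offset),
    4) error = measurement minus refit line, outliers included.
    """
    slope, intercept = np.polyfit(z_stage, z_meas, 1)
    resid = z_meas - (slope * z_stage + intercept)
    inlier = np.abs(resid) <= outlier_tol
    offset = np.mean(z_meas[inlier] - z_stage[inlier])  # slope forced to 1
    return z_meas - (z_stage + offset)

# Synthetic track: unknown 3 mm offset between the traverse zero and the
# nominal focal plane, with one gross outlier injected at index 10.
z_stage = np.arange(0.0, 51.0)      # 51 traverse positions, 1 mm apart
z_meas = z_stage + 3.0
z_meas[10] += 8.0                   # injected outlier (> 5 mm threshold)
err = depth_errors(z_stage, z_meas)
```

Because the intercept is estimated only from inliers, the injected outlier does not bias the recovered offset, mirroring the motivation for the two-pass fit in the text.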

Figure 10 provides an example of the range of depth error measurements determined in this experiment. This histogram shows the individual depth error measurements from the middle depth configuration where the magnification is 0.5. In the following analysis, a distinction is made between measures of accuracy and precision. Accuracy is quantified by the mean of the depth error just defined, while precision is quantified by the standard deviation, σ, in measured depth error. Plots for all other configurations are similar and not displayed here for brevity.

#### 3.1 Confirmation of theoretical depth resolution

All conditions were analyzed as discussed above. Measured depth precision is summarized in Table 2. Here, the standard deviation of the measured error, σ, is taken from all measurements which fall within the calibrated regions. Where necessary, data from the near, middle, and far configurations are combined to quantify the overall measurement precision for each magnification.

Comparison of the overall measured precision in the two right-most columns in Table 2 with the theoretical Δ*z* in the left column indicates that the theory roughly corresponds to ± 4σ. This is likely the first experimental confirmation that measured particle depth precision using a single plenoptic camera is bound by Eq. (1). Therefore, when designing new plenoptic measurements, Eq. (1) can provide a reasonable pre-test estimate of measurement precision.

In addition, the results in Table 2 also indicate that the uncertainty predicted by Eq. (1) may be overly conservative for many applications. Assuming normally distributed errors, the ± 4σ range predicted by Eq. (1) corresponds to a 99.994% confidence bound. If, for example, one is instead interested in predicting the 95% confidence bound (~ ± 2σ), Eq. (1) should be multiplied by 0.5. Other confidence bounds can be similarly determined.
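The scaling argument above can be wrapped in a small helper: if the theoretical Δ*z* corresponds to a ± 4σ band, then σ ≈ Δ*z*/8, and the band at any other confidence level follows from the normal quantile. This is a sketch of the reasoning under the stated normality assumption; the function name is illustrative.

```python
from scipy.stats import norm

def bound_scaling(confidence):
    """Multiplier to apply to Eq. (1): the full-width, two-sided error
    band at the given confidence level, expressed as a fraction of
    delta_z, assuming normal errors with sigma = delta_z / 8."""
    k = norm.ppf(0.5 * (1.0 + confidence))  # two-sided normal quantile
    return 2.0 * k / 8.0

factor_95 = bound_scaling(0.95)      # ~0.49, the "multiply by 0.5" case
factor_9999 = bound_scaling(0.99994)  # ~1.0, recovering Eq. (1) itself
```

Evaluating at 95% confidence reproduces the factor of roughly 0.5 quoted above, while the 99.994% level recovers the full Eq. (1) band.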

Digital in-line holography (DIH) is a related single camera, 3D particle diagnostic [21]. Similar to the current results, the theoretical prediction of DIH uncertainty based on the depth of field of a refocused particle image tends to significantly exceed measurements of uncertainty when quantified by the standard deviation of errors [20]. In DIH it is also well known that the exact scaling between theory and experiment is a function of the image processing routines [18]. Although 3D particle measurements using plenoptic cameras have yet to be explored to the extent of DIH, it seems likely that the details of the data processing methodologies would affect the scaling between the measured uncertainty and the theory observed here. This should motivate future work to determine if alternative data processing routines could improve measurement precision.

#### 3.2 Effect of particle position

The previous section demonstrates that the overall depth uncertainty scales with Eq. (1). It is also interesting to investigate the local uncertainty as a function of *z*-depth, particularly as it relates to measurements both within and outside of *z*_{N} ≤ *z* ≤ *z*_{F}. That is the goal of this sub-section.

Figure 11(a) displays the average depth displacement error as a function of *z*-location for each tested configuration. This plot includes only data within the depth range included in the volumetric calibration, which is within 25 mm of the center of each configuration. The data displayed here was calculated by discretizing the volume in the *z*-direction and averaging the depth errors for displacements within each of these discretized regions. 99% confidence bounds are shown by the vertical bars, and the relatively small confidence intervals indicate the statistical significance of this large data set. Examination of the errors in Fig. 11 does not show a significant trend in accuracy as a function of particle depth, suggesting that volumetric calibration has reduced the depth error bias seen in previous work [1].
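The depth-binned averaging with confidence bounds can be sketched as below. The bin width and names are illustrative, and the 99% interval uses the large-sample normal approximation (z = 2.576), which is reasonable given the sample counts in this data set.

```python
import numpy as np

def binned_mean_ci(z, err, edges, zcrit=2.576):
    """Mean depth error and 99% CI half-width per z-bin.

    z, err: particle depths and depth errors; edges: bin edges.
    CI half-width = zcrit * std / sqrt(n) (large-sample approximation).
    """
    idx = np.digitize(z, edges) - 1
    means, half = [], []
    for b in range(len(edges) - 1):
        e = err[idx == b]
        means.append(e.mean())
        half.append(zcrit * e.std(ddof=1) / np.sqrt(len(e)))
    return np.array(means), np.array(half)

# Synthetic check: zero-mean errors over a 50 mm depth range should give
# binned means that fall inside their own confidence intervals.
rng = np.random.default_rng(1)
z = rng.uniform(-25.0, 25.0, 20000)
err = rng.normal(0.0, 0.5, 20000)
edges = np.linspace(-25.0, 25.0, 11)
means, half = binned_mean_ci(z, err, edges)
```

The shrinking of the interval with the per-bin sample count is why such a large data set yields the narrow confidence bounds noted above.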

A normalization of this data is displayed in Fig. 11(b), where average depth error is normalized by the pitch of a microlens in object space, Δ*x*, and *z*-location is normalized by *z*_{N} for *z* < 0 and *z*_{F} for *z* > 0. From this figure it is evident that errors generally lie within a range of 1.5·Δ*x*, similar to previous work [22]. Furthermore, it is interesting to note that relatively good accuracy is obtained outside of *z*_{N} ≤ *z* ≤ *z*_{F}. This can likely be attributed to the use of volumetric dewarping of the measurement domain, such that accurate results can be obtained outside of *z*_{N} ≤ *z* ≤ *z*_{F} even though the ability to create tightly focused images is degraded.

Likewise, Fig. 12(a) displays the standard deviation of depth displacement error as a function of *z*-location for each configuration. Error bars again show 99% confidence bounds. Results show a clear trend of decreasing standard deviation (increased precision) with increasing magnification, as is generally expected and predicted by Eq. (1). This trend largely collapses in Fig. 12(b), where measured σ is normalized by Δ*z*, again confirming the validity of Eq. (1). Within *z*_{N} ≤ *z* ≤ *z*_{F} the averaged measured standard deviation is roughly 0.125·Δ*z*, in agreement with the discussion of Table 2. Also, within this range there is no clear trend as a function of magnification or particle depth.

Outside of *z*_{N} ≤ *z* ≤ *z*_{F} the normalized standard deviation tends to increase, indicating that Eq. (4) provides a reasonable estimate of the depth range over which measurements are most precise. In contrast to the average error in Fig. 11, volumetric dewarping does not remove this effect. This is likely because a dewarping procedure does not generally affect the local sharpness of particle images, which is the main driver of measurement precision. On the other hand, the decrease in precision is somewhat gradual outside of *z*_{N} ≤ *z* ≤ *z*_{F}. This indicates that measurement uncertainty may be acceptable for some applications over depth ranges larger than those given by Eq. (4). Therefore, the quantitative results shown in Fig. 12 will provide useful guidance for design of new plenoptic measurements. Finally, it should again be noted that results are also likely to depend on the details of the image processing routines. It would be informative to repeat this analysis with alternative processing methodologies to determine if further improvements are possible.

#### 3.3 In-plane error

Though measurement of in-plane error is not the primary goal of this work, a general analysis of in-plane uncertainty can be made based on the fact that physical translation of the particle field was aligned with the depth direction; therefore, error can be defined as any measured movement in the *x*- or *y*-directions. Similar to the analysis of the *z*-error, in-plane error is found by fitting the *x*- and *y*-positions to a constant value with respect to traverse position. Error is thus defined as the measured deviation from this constant.

Errors in the *x*- and *y*-directions were analyzed, and as expected the results were found to be similar in both directions. For brevity, only the *x*-error is considered here. As is the case for the *z*-error in Fig. 11, the average error in the *x*-direction was found to be roughly constant as a function of position and less than ~ ± 0.05·Δ*x*. This is thought to be a reflection of the accuracy of the global dewarping procedures.

Again, the standard deviation in *x*-positional error is considered as a metric of precision. Figure 13(a) shows the measured results as a function of measured particle depth, *z*, while Fig. 13(b) shows normalized quantities. Comparison of the magnitude of the results in Fig. 13(a) with the analogous plot of *z*-error in Fig. 12(a) indicates that the standard deviation in the in-plane error is roughly an order of magnitude less than in the *z*-direction. Figure 13(b) also indicates that the measured standard deviation is a fraction of the size of a single refocused pixel, Δ*x*. This is likely a reflection of the use of sub-pixel region centroid approximation common to image processing routines. Finally, Fig. 13(b) does indicate reduced precision outside of *z*_{N} ≤ *z* ≤ *z*_{F} and some collapse of the data by normalization. However, it should be reiterated that the current work was not focused on the quantification of these in-plane errors, and additional experimentation is warranted to check the consistency of the observed trends. In particular, the relative particle size as a function of magnification and the effect of particle size on the particle location method were not examined.

#### 3.4 Application to drop impact

Finally, we return to the application to drop impact presented in Fig. 1, and detailed in Hall et al. [4]. In that experiment, the 3D size, location, and three-component velocity of the secondary droplets were quantified. In contrast to the current experiment, an expected particle displacement (velocity) was not known *a priori* for every measured particle. Instead, overall error was estimated based on expected flow symmetry about the point of impact. Displacements in the in-plane direction were assumed to have negligible uncertainty and any additional scatter in the depth direction was assumed to be measurement error. Further details of this analysis are given in Hall et al. [4].

Results in Hall et al. [4] suggested an overall standard deviation in measurement uncertainty of approximately 0.175 mm; however, this value included all measured particles. When only the measured particles within *z*_{N} ≤ *z* ≤ *z*_{F} are considered, this value is reduced slightly, to 0.150 mm. Given the experimental parameters for this data set, including a nominal magnification of 0.82, the theoretical depth uncertainty is Δ*z* = 0.99 mm. Using the scaling suggested by this work, the predicted standard deviation is 0.125·Δ*z* = 0.13 mm. Given the various approximations necessary to derive the uncertainty measured in Hall et al. [4], the overall agreement between theory and measured uncertainty is promising.

The current work further suggests that the uncertainty in a similar drop impact experiment could be improved by adjustment of the physical experimental parameters, specifically by designing an objective and microlens combination with a higher magnification that minimizes Δ*z* while maximizing the measurable depth range, *z*_{N} ≤ *z* ≤ *z*_{F}.

## 4. Conclusions and future work

Analysis of the experimental results in this work shows overall agreement with the theoretical values of uncertainty in 3D particle localization using a single plenoptic camera. For all optical configurations considered here, the mean displacement error in both the depth, *z*, and in-plane, *x*-*y*, directions is shown to roughly correspond with the size of a refocused pixel over the entire measurement volume considered. This is thought to be a reflection of the accuracy of the volumetric dewarping method. Likewise, the predicted precision in the *z*-direction, based on the depth of field of a numerically refocused image, is shown to roughly correspond with ± 4 times the standard deviation of measured depth error, again confirming theory. Finally, precision is shown to be optimal within the overall predicted depth of field of the plenoptic sub-images, with gradual degradation outside of this range.

These results can be used to optimize future plenoptic configurations by adjusting front objective and microlens optical parameters to meet the desired resolution and measurement domain as predicted by Eqs. (1)-(4). In addition, the quantitative results provided here will allow future investigators to tune accuracy and precision to meet application specific tolerance bands.

To further characterize these uncertainties, areas of future work could include examination of similar experiments in which the size of the particles is varied and in-plane translation is included. Other variations not examined here that might influence uncertainty include particle shape and density. Alternative data processing methods could also affect the measurement uncertainties. Possible methods include a perspective shift based algorithm [23], 3D deconvolution [24], or sparsity based methods [25], which may provide additional insight with the potential for improved uncertainty and/or reduced computational requirements.

## Funding

Sandia National Laboratories (DE-NA0003525).

## References and links

**1. **T. W. Fahringer, K. P. Lynch, and B. S. Thurow, “Volumetric particle image velocimetry with a single plenoptic camera,” Meas. Sci. Technol. **26**(11), 115201 (2015). [CrossRef]

**2. **H. Chen and V. Sick, “Three-dimensional three-component air flow visualization in a steady-state engine flow bench using a plenoptic camera,” SAE Int. J. Engines **10**(2), 625–635 (2017). [CrossRef]

**3. **J. Klemkowsky, T. Fahringer, C. Clifford, B. Bathel, and B. Thurow, “Plenoptic background oriented schlieren imaging,” Meas. Sci. Technol., in press (2017).

**4. **E. M. Hall, B. S. Thurow, and D. R. Guildenbecher, “Comparison of three-dimensional particle tracking and sizing using plenoptic imaging and digital in-line holography,” Appl. Opt. **55**(23), 6410–6420 (2016). [CrossRef] [PubMed]

**5. **H. Chen, V. Sick, M. A. Woodward, and D. Burke, “Human iris 3D imaging using a micro-plenoptic camera,” Opt. Life Sci. **2017**, 8–10 (2017).

**6. **K. C. Johnson, B. S. Thurow, T. Kim, G. Blois, and K. T. Christiansen, “Volumetric velocity measurements in the wake of a hemispherical roughness element,” AIAA J. **55**(7), 2158–2173 (2017). [CrossRef]

**7. **P. Drap, J. P. Royer, M. M. Nawaf, M. Saccone, D. Merad, À. López-Sanz, J. B. Ledoux, and J. Garrabou, “Underwater photogrammetry, coded target and plenoptic technology: a set of tools for monitoring red coral in the Mediterranean Sea in the framework of the “Perfect” project,” in ISPRS - Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci **XLII-2**(W3), 275–282 (2017). [CrossRef]

**8. **T. T. Truscott, J. Belden, R. Ni, J. Pendlebury, and B. McEwen, “Three-dimensional microscopic light field particle image velocimetry,” Exp. Fluids **58**(3), 16 (2017). [CrossRef]

**9. **M. Jambor, V. Nosenko, S. K. Zhdanov, and H. M. Thomas, “Plasma crystal dynamics measured with a three-dimensional plenoptic camera,” Rev. Sci. Instrum. **87**(3), 033505 (2016). [CrossRef] [PubMed]

**10. **E. A. Deem, D. Agentis, F. Nicolas, L. N. Cattafesta, T. Fahringer, and B. Thurow, “A canonical experiment comparing tomographic and plenoptic PIV,” in 10th Pacific Symp. Flow Visulaization Image Process. (2015), pp. 15–18.

**11. **T. Georgiev and A. Lumsdaine, “Superresolution with plenoptic 2.0 cameras,” Signal Recover. Synth. 6–8 (2009). [CrossRef]

**12. **S. Wanner and B. Goldluecke, “Variational light field analysis for disparity estimation and super-resolution,” IEEE Trans. Pattern Anal. Mach. Intell. **36**(3), 606–619 (2014). [CrossRef] [PubMed]

**13. **X. Jin, L. Liu, Y. Chen, and Q. Dai, “Point spread function and depth-invariant focal sweep point spread function for plenoptic camera 2.0,” Opt. Express **25**(9), 9947–9962 (2017). [CrossRef] [PubMed]

**14. **F. V. Pepe, F. Di Lena, A. Mazzilli, G. Scarcelli, and M. D’Angelo, “Diffraction-limited plenoptic imaging with correlated light,” arXiv preprint, 1–8 (2017).

**15. **E. A. Deem, L. N. Cattafesta, T. W. Fahringer, and B. S. Thurow, “On the resolution of plenoptic PIV,” Meas. Sci. Technol. **27**(8), 084003 (2016). [CrossRef]

**16. **R. Ng, M. Levoy, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Stanford Tech Rep. CTSR 1–11 (2005).

**17. **M. Levoy, “Light field photography, microscopy, and illumination,” in International Optical Design Conference (2010), pp. 6–8.

**18. **J. Gao, D. R. Guildenbecher, P. L. Reu, and J. Chen, “Uncertainty characterization of particle depth measurement using digital in-line holography and the hybrid method,” Opt. Express **21**(22), 26432–26449 (2013). [CrossRef] [PubMed]

**19. **T. Khanam, M. Nurur Rahman, A. Rajendran, V. Kariwala, and A. K. Asundi, “Accurate size measurement of needle-shaped particles using digital holography,” Chem. Eng. Sci. **66**(12), 2699–2706 (2011). [CrossRef]

**20. **D. R. Guildenbecher, J. Gao, P. L. Reu, and J. Chen, “Digital holography simulations and experiments to quantify the accuracy of 3D particle location and 2D sizing using a proposed hybrid method,” Appl. Opt. **52**(16), 3790–3801 (2013). [CrossRef] [PubMed]

**21. **J. Katz and J. Sheng, “Applications of holography in fluid mechanics and particle dynamics,” Annu. Rev. Fluid Mech. **42**(1), 531–555 (2010). [CrossRef]

**22. **E. M. Hall, T. W. Fahringer, and B. S. Thurow, “Volumetric calibration of a plenoptic camera,” AIAA SciTech Forum, 55th Annu. Aerosp. Sci. Meet. 1–13 (2017). [CrossRef]

**23. **N. Zeller, F. Quint, and U. Stilla, “Depth estimation and camera calibration of a focused plenoptic camera for visual odometry,” ISPRS J. Photogramm. Remote Sens. **118**, 83–100 (2016). [CrossRef]

**24. **J. T. Bolan, “Enhancing Image Resolvability in Obscured Environments Using 3D Deconvolution and a Plenoptic Camera,” Auburn University, MS Thesis (2015).

**25. **H.-Y. Liu, E. Jonas, L. Tian, J. Zhong, B. Recht, and L. Waller, “3D imaging in volumetric scattering media using phase-space measurements,” Opt. Express **23**(11), 14461–14471 (2015). [CrossRef] [PubMed]