Optica Publishing Group

Underwater plenoptic cameras optimized for water refraction

Open Access

Abstract

By inserting a microlens array (MLA) between the main lens and the imaging sensor, plenoptic cameras can capture 3D information of objects via single-shot imaging. However, an underwater plenoptic camera requires a waterproof spherical shell to isolate the inner camera from the water, so the performance of the overall imaging system changes due to the refractive effects of the shell and the water medium. Accordingly, imaging properties such as image clarity and field of view (FOV) will change. To address this issue, this paper proposes an optimized underwater plenoptic camera that compensates for the changes in image clarity and FOV. Based on geometry simplification and ray propagation analysis, the equivalent imaging process of each portion of an underwater plenoptic camera is modeled. To mitigate the impact of the spherical shell and the water medium on image clarity and FOV, as well as to ensure successful assembly, an optimization model for the physical parameters is derived after calibrating the minimum distance between the spherical shell and the main lens. Simulation results before and after underwater optimization are compared, confirming the correctness of the proposed method. Additionally, a practical underwater focused plenoptic camera is designed, further demonstrating the effectiveness of the proposed model in real underwater scenarios.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Imaging 3D objects underwater is of great importance in a variety of tasks such as underwater environment exploration [1,2], biological protection [3,4] and robot navigation [5-8]. However, because of the instability of water flow and floating in the water, multi-sensor-based systems such as camera arrays, multi-camera systems and depth cameras are not applicable to real-time underwater 3D imaging. The plenoptic camera [9], which can capture 4D spatial and angular information in a single shot by inserting a microlens array (MLA) between the main lens and the imaging sensor, shows high potential in these application areas because of its compact and stable architecture and an imaging speed high enough to cope with water flow.

The underwater application of a plenoptic camera necessitates the incorporation of a waterproof shell positioned in front of the lens. Compared to a flat shell, which reduces the field of view (FOV), a spherical shell can preserve the FOV when the inner camera is appropriately positioned [10,11], and is therefore adopted in our underwater plenoptic camera system. Nevertheless, the underwater environment poses challenges, including absorption, scattering, and refraction, which can degrade the imaging performance of a plenoptic camera that has been optimized for air. While the effects of absorption and scattering are typically alleviated through postprocessing techniques [12,13], it remains essential to optimize the underwater plenoptic camera for water refraction. As shown in Fig. 1, after adding a spherical shell, two additional refractions occur during ray propagation, and the geometric parameters of the spherical shell lead to variations in image clarity. In addition, the difference between the refractive indices of air and water alters the ray refraction process, further changing imaging characteristics such as FOV and image clarity. Therefore, it is necessary to optimize physical parameters such as the distance between the spherical shell and the main lens, and the distance between the main lens and the MLA, during the design and assembly of the underwater plenoptic camera.


Fig. 1. The configuration of: (a) the focused plenoptic camera and corresponding subaperture image crop; and (b) the underwater focused plenoptic camera adding a spherical shell and its corresponding subaperture image crop.


To the best of our knowledge, there is no scientific research addressing the imaging quality degradation caused by the water and the waterproof shell during the design and optimization of underwater plenoptic cameras. However, some work exists on either optimizing plenoptic cameras in air or designing cameras for underwater use, which can be divided into two groups: analysis or optimization of plenoptic cameras in air [14-17] and design of underwater cameras [18,19]. To analyze or optimize a plenoptic camera in air, C. Perwaß et al. [14] described the relationship between its optical parameters (such as object distance, image distance, and the distance between the MLA and the sensor) and performance metrics such as angular resolution and depth of field, providing an initial direction for camera design. V. Drazic et al. [15], T. Li et al. [16] and T. Michels et al. [17] proposed improved plenoptic camera models with better depth discrimination, image resolution or depth of field, which provide valuable insights for optimizing plenoptic cameras in air but do not solve the problems of image clarity degradation and FOV change when using a plenoptic camera underwater. For the design of underwater cameras, R. Qu et al. [18] developed an underwater optical system that meets the required FOV, but their method cannot be directly applied to optimize underwater plenoptic cameras due to the difference in structure. The Guatek company [19] used the Raytrix R5 to design an underwater plenoptic camera with a flat shell, which reduces the FOV to 1/1.33 of that in air, with limited optimization; moreover, this design method is not suitable for optimizing an underwater plenoptic camera with a spherical shell.
In summary, although existing works offer some strategies for optimizing plenoptic cameras in the air or designing cameras for underwater use, none of them has successfully designed and optimized high-performance underwater plenoptic cameras with a spherical shell while maintaining the imaging performance. Therefore, an optimization method that considers both image clarity and FOV is highly desired for underwater plenoptic cameras.

This paper proposes a ray propagation model and a physical parameter optimization method for the underwater plenoptic camera. First, after adding a spherical shell in front of the main lens, a geometric optics model for the plenoptic camera with a spherical shell is constructed to quantify the object-image relationship, based on ray propagation analysis. Second, after calibrating the minimum distance between the spherical shell and the main lens, an optimization model for the underwater plenoptic camera is proposed to reduce the impact of the spherical shell and water propagation while ensuring successful assembly. This optimization method effectively improves both the FOV of the underwater plenoptic camera and the quality of the images it captures. The correctness and robustness of the proposed model in preserving image clarity and FOV are verified by simulation, and its effectiveness is further demonstrated by designing a real underwater plenoptic camera and testing it underwater.

The rest of this paper is organized as follows. Section 2 describes the proposed ray propagation model and optimization method for the underwater plenoptic camera. Experimental results are provided in Section 3 followed by conclusions in Section 4.

2. Ray propagation model and optimization method proposed for underwater plenoptic cameras

Unlike traditional plenoptic cameras, underwater plenoptic cameras are required to image objects in water rather than in air. Furthermore, a waterproof spherical shell, providing both water resistance and optical transmission, is added in front of the main lens of the plenoptic camera to protect the inner system from being soaked and to transmit rays from the water to the inner plenoptic camera. These new features sacrifice imaging performance unless physical parameters, such as the distance between the shell and the main lens and the distance between the main lens and the MLA, are optimized. To compensate for the loss of imaging performance, an optimization model for these distances is proposed. In this section, a ray propagation analysis for the spherical shell is first developed by analyzing its optical properties in water, giving an overall geometric optics model of the underwater plenoptic camera. By combining this geometric optics model with plenoptic theory, a clarity optimization model is derived to improve image clarity. Moreover, to preserve the FOV of the underwater plenoptic camera and ensure successful assembly, a joint optimization model is then proposed to optimize image clarity and FOV simultaneously, after calibrating the minimum distance between the spherical shell and the main lens.

2.1 Geometric optics model for the spherical shell of underwater plenoptic cameras

To analyze the ray propagation process in the underwater plenoptic camera, a geometric optics model for the spherical shell is proposed here. Without loss of generality, the shell can be modeled as a two-layer monocentric spherical refraction system, which can be analyzed as a two-stage refraction process. Taking a Galilean-mode [9,20] underwater plenoptic camera with a spherical shell as an example, the optical structure is shown in Fig. 2, based on which the model is derived. For a Kepler-mode [9,20] underwater plenoptic camera, the following derivations transfer directly by placing the intermediate image plane in front of the image plane, i.e., by replacing ${a}$ with $- {a}$.


Fig. 2. Ray propagation process of the Galilean-mode underwater plenoptic camera.


As shown in Fig. 2, when considering the spherical shell, the convergence of rays originating from the paraxial object point A may not result in a perfect focal point due to spherical aberration. However, it is still possible to approximate the formation of a small image point using Gaussian optics [21] and paraxial ray analysis [22], as follows:

$$\frac{{{n_{SHELL}}}}{{{l_{OA^{\prime}}}}} - \frac{{{n_{WATER}}}}{{ - {d_{OBJ\_A}}}} = \frac{{({{n_{SHELL}}\; - \; {n_{WATER}}} )}}{{{l_{OC}}}}$$

Equation (1) assumes that object point A is located near the optical axis and introduces larger errors as the distance between point A and the optical axis increases. This equation is commonly used to calculate the focal length of a convex or concave lens [23,24] under the assumption of a distant object (${d_{OBJ\_A}} = \textrm{infinity}$), in which case the distance ${l_{OA^{\prime}}}$ becomes the focal length. To keep near-axis objects in focus, the following two-stage refraction process is analyzed based on Eq. (1).

In stage 1, paraxial object point A is imaged by the shell’s outer surface to point $A^{\prime}$; the refraction process follows:

$${l_{OA^{\prime}}} = \frac{{{n_{SHELL}}}}{{\frac{{({{n_{SHELL}}\; - \; {n_{WATER}}} )}}{{{l_{OC}}}} + \; \frac{{{n_{WATER}}}}{{ - {d_{OBJ\_A}}}}}}$$
where ${n_{SHELL}}$ represents the refractive index of the spherical shell; ${n_{WATER}}$ represents the refractive index of water; ${l_{OC}}$ denotes the radius of the outer spherical shell; ${d_{OBJ\_A}}$ is the object distance between the object point A and point O; point O is the intersection of the main optical axis and the outer spherical shell; point C is the center of the spherical shell.

In stage 2, the inner surface of the spherical shell refracts the rays from the first refraction. The refracted rays can be traced back to a virtual point $A^{\prime\prime}$, whose position follows:

$${l_{O^{\prime}A^{\prime\prime}}} ={-} \frac{{{n_{AIR}}}}{{\frac{{({{n_{AIR}} - {n_{SHELL}}} )}}{{{l_{O^{\prime}C}}}} + \frac{{{n_{SHELL}}}}{{{l_{O^{\prime}A^{\prime}}}}}}}$$
where ${n_{AIR}}$ represents the refractive index of air; ${l_{O^{\prime}C}}$ denotes the radius of the inner spherical shell; ${l_{O^{\prime}A^{\prime}}}$ denotes the distance between point $O^{\prime}$ and point $A^{\prime}$; ${l_{O^{\prime}A^{\prime\prime}}}$ denotes the distance between point $O^{\prime}$ and point $A^{\prime\prime}$; point $O^{\prime}$ is the intersection of the main optical axis and the inner spherical shell.

Applying the condition in Fig. 2 that ${l_{O^{\prime}A^{\prime}}} = {l_{OA^{\prime}}} - {l_{O^{\prime}O}}$ and substituting Eq. (2) into Eq. (3) gives the relationship between ${l_{O^{\prime}A^{\prime\prime}}}$ and ${d_{OBJ\_A}}$:

$${l_{O^{\prime}A^{\prime\prime}}} ={-} \frac{{{n_{AIR}}}}{{{C_1} + \frac{{{n_{SHELL}}({{C_2}{d_{OBJ\_A}} - {n_{WATER}}} )}}{{{n_{SHELL}}{d_{OBJ\_A}} - {l_{O^{\prime}O}}({{C_2}{d_{OBJ\_A}} - {n_{WATER}}} )}}}}$$
where ${C_1} = \frac{{{n_{AIR}} - {n_{SHELL}}}}{{{l_{O^{\prime}C}}}}$ and ${C_2} = \frac{{{n_{SHELL}} - {n_{WATER}}}}{{{l_{OC}}}}$; ${l_{O^{\prime}O}}$ denotes the thickness of the spherical shell.

Then the distance between virtual point $A^{\prime\prime}$ and the main lens, i.e., the equivalent object distance for the main lens, is given by:

$${l_{A^{\prime\prime}M}} = {l_{O^{\prime}A^{\prime\prime}}} + {l_{O^{\prime}C}} + e,$$
where e denotes the distance between the center of the spherical shell and the principal point of the main lens; we define $e < 0$ when the principal point of the main lens lies to the left of the center of the spherical shell.

As shown in Eq. (5), given fixed geometric parameters of the spherical shell, the equivalent object distance ${l_{A^{\prime\prime}M}}$ is a function of ${l_{O^{\prime}A^{\prime\prime}}}$ and e, where ${l_{O^{\prime}A^{\prime\prime}}}$ is in turn a function of ${d_{OBJ\_A}}$ as defined in Eq. (4). After modeling the paraxial ray propagation through the spherical shell, it is possible to judge whether an object point is in focus. In other words, if we design an underwater plenoptic camera that follows Eq. (5), we can ensure that it captures clear plenoptic images at the designed object distance.
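The two-stage model of Eqs. (2)-(5) can be sketched numerically as below; the shell radii and refractive indices are assumed example values, not the parameters of the actual design.

```python
# Numerical sketch of Eqs. (2)-(5); the shell radii and refractive
# indices below are assumed example values, not the actual design.

def equivalent_object_distance(d_obj, e,
                               l_OC=60.0,    # outer shell radius (mm, assumed)
                               l_OpC=55.0,   # inner shell radius (mm, assumed)
                               n_water=1.333, n_shell=1.49, n_air=1.0):
    """Equivalent object distance l_{A''M} seen by the main lens."""
    l_OpO = l_OC - l_OpC  # shell thickness
    # Stage 1, Eq. (2): the outer surface images A to A'.
    l_OAp = n_shell / ((n_shell - n_water) / l_OC + n_water / (-d_obj))
    # Stage 2, Eq. (3): the inner surface images A' to virtual point A''.
    l_OpAp = l_OAp - l_OpO
    l_OpApp = -n_air / ((n_air - n_shell) / l_OpC + n_shell / l_OpAp)
    # Eq. (5): distance from A'' to the principal point of the main lens.
    return l_OpApp + l_OpC + e

# An object 800 mm away in water appears much closer to the main lens,
# which is why the camera defocuses if the shell is added directly.
print(equivalent_object_distance(800.0, 0.0))
```

With these assumed values the equivalent object distance is far shorter than the true 800 mm, consistent with the blur observed in Fig. 1(b) when a shell is added without re-optimization.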

2.2 Optimization method for the underwater plenoptic camera

Based on the derivation of Eq. (5), a difference between the equivalent object distance ${l_{A^{\prime\prime}M}}$ and ${d_{OBJ\_A}}$ is found, which explains why the central perspective image in Fig. 1(b) becomes blurred when a spherical shell is added directly. Therefore, the clarity optimization model is derived first to reduce the blurring. Then the minimum distance between the spherical shell and the main lens is obtained by calibration, based on which a joint optimization model is proposed to optimize image clarity and FOV at the same time.

Reducing the underwater blur amounts to making the camera focus on the object. According to plenoptic imaging theory [9,14], if the object distance changes from ${d_{OBJ\_A}}$ to ${l_{A^{\prime\prime}M}}$, then the image distance v, the distance between the MLA and the image plane, b, and the distance between the image point of the main lens and the MLA, a, should all change. However, the distances a and b must also satisfy [25]:

$$- \frac{1}{a} + \frac{1}{b} = \frac{1}{{{f_{MLA}}}}$$
and
$$\frac{a}{b} = N,$$
where ${f_{MLA}}$ denotes the focal length of the MLA, and N is the angular resolution of the plenoptic camera. Since both of these should be fixed during design, a and b remain fixed after adding the spherical shell. Hence the image distance becomes the only parameter that can be changed to satisfy the Gauss equation [26]:
$$\frac{1}{{{l_{A^{\prime\prime}M}}}} + \frac{1}{{v + \Delta v}} - \frac{1}{f} = 0,$$
where f denotes the focal length of the main lens; $\Delta v$ is the image distance change of the main lens.

Equation (8) provides an implicit expression for $\Delta v$ in terms of ${l_{A^{\prime\prime}M}}$, which is a function of ${d_{OBJ\_A}}$ and e. Using this equation, $\Delta v$ can be optimized as a variable while fixing the object distance ${d_{OBJ\_A}}$ and the distance e, which brings the underwater plenoptic camera into focus and improves image clarity. The optimization satisfying Eq. (8) is called clarity optimization in the following.
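As a minimal sketch of Eq. (8), $\Delta v$ can be obtained in closed form for a thin main lens; the focal length, the original image distance, and the equivalent object distance below are illustrative assumptions.

```python
# Minimal sketch of the clarity optimization of Eq. (8): solve for the
# image-distance shift Δv of a thin main lens. The focal length, the
# original image distance, and l_{A''M} are illustrative assumptions.

def delta_v(l_AppM, f, v):
    """Solve 1/l_{A''M} + 1/(v + Δv) = 1/f for Δv."""
    v_new = 1.0 / (1.0 / f - 1.0 / l_AppM)  # required image distance
    return v_new - v

# Example: an f = 50 mm lens originally focused at v = 53.57 mm (object
# at 750 mm in air); the shell shortens the equivalent object distance
# to an assumed 180.7 mm, so the image distance must grow.
print(delta_v(180.7, 50.0, 53.57))
```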

During the clarity optimization process, variations in the parameter e lead to a set of solution pairs for $\Delta v$ and e. To keep the FOV of the underwater plenoptic camera as close as possible to that of the inner plenoptic camera, the distance e should be as small as possible [10]. In the actual assembly of the underwater plenoptic camera, setting the distance e between the principal point of the main lens and the center of the spherical shell to 0 mm, i.e., placing the main lens at the center of the spherical shell, helps minimize refraction effects [27]. This alignment ensures that the FOV remains consistent between the main lens and the inner camera, as depicted in Fig. 3(b).


Fig. 3. The relationship between FOV of the inner camera and underwater camera for different e.


In addition to the case mentioned above where $e = 0$, there are two more situations where the distance e can vary. In Fig. 3(a), when the distance e is less than 0, the FOV of the underwater camera, ${\alpha _{WATER}}$, is larger than that of the inner camera, $\alpha$, with the inner camera's FOV kept fixed. When the principal point of the inner camera is located right on the inner wall of the spherical shell ($e ={-} {l_{O^{\prime}C}}$), the situation becomes similar to a flat shell, as the curvature of the spherical shell appears to the inner camera as a radius of infinity.

On the other hand, in Fig. 3(c), as the distance e increases beyond 0, the FOV ${\alpha _{WATER}}$ of the underwater camera becomes smaller than the FOV $\alpha $ of the inner camera. However, the distance e cannot exceed ${l_{O^{\prime}C}}/\textrm{tan}({\mathrm{\alpha }/2} )$ to maintain the inner camera's ability to observe the surrounding environment through the spherical shell, thereby preventing the occurrence of black edges in the captured images.
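The upper bound on e stated above can be checked directly; the inner radius and FOV used here are assumed example values.

```python
import math

# The bound from Fig. 3(c): the principal point can sit at most
# l_{O'C}/tan(α/2) behind the shell center before black edges appear.
# Radius and FOV below are assumed example values.

def max_e(l_OpC, alpha_deg):
    return l_OpC / math.tan(math.radians(alpha_deg) / 2.0)

print(max_e(55.0, 60.0))  # assumed 55 mm inner radius, 60° FOV
```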

In practice, the distance e is constrained because of the nontrivial distance between the front surface of the main lens and its principal plane. Figure 4 shows a diagram of the underwater plenoptic camera with a commercial lens. In this configuration, the commercial lens is positioned as close as possible to the spherical shell, so ${l_{bound}}$ represents the minimum distance between the center of the spherical shell, C, and the optical center of the commercial lens, $C^{\prime}$. Consequently, the distance e must satisfy $e \ge {l_{bound}}$ for assembly to be possible. For a given commercial lens, the exact position of the optical center is difficult to obtain because detailed information on the optical structure is unavailable. Therefore, ${l_{bound}}$ is calibrated as follows.


Fig. 4. The diagram of the underwater plenoptic camera with a commercial lens.


First, we employ our calibration method for the inner plenoptic camera [28,29]. This method uses a triple-level calibration board, consisting of three separate boards with known depth disparity, as the calibration object. By analyzing the optical parallaxes at the different depths, we derive the geometric parameters: the distance between the principal plane of the main lens and the principal plane of the MLA, denoted d, and the distance between the MLA and the sensor, denoted b. Second, as shown in Fig. 4, for a paraxial calibration object point B, suppose the image points of B in adjacent microlenses are ${P_0}$ and ${P_1}$ and the distance between them is ${l_{{P_0}{P_1}}}$; then the object distance of the MLA, ${a_B}$, can be derived from the similar-triangle principle [30] by substituting b into:

$${a_B} = \frac{{b{D_{MLA}}}}{{{D_{MLA}} - {l_{{P_0}{P_1}}}}},$$
where ${D_{MLA}}$ is the diameter of the microlens. The image distance of the main lens ${v_B}$ can be obtained by:
$${v_B} = {a_B} + d$$

Since the equivalent object distance of object point B, i.e., ${l_{B^{\prime\prime}C^{\prime}}}$, follows Gauss equation:

$$\frac{1}{{{l_{B^{\prime\prime}C^{\prime}}}}} + \frac{1}{{{v_B}}} = \frac{1}{f},$$
and ${l_{B^{\prime\prime}C^{\prime}}}$ also satisfies ${l_{B^{\prime\prime}C^{\prime}}} = {l_{bound}} + {l_{O^{\prime}C}} + {l_{O^{\prime}B^{\prime\prime}}}$, where ${l_{O^{\prime}C}}$ is the radius of the inner spherical shell and ${l_{O^{\prime}B^{\prime\prime}}}$ is the distance between point $O^{\prime}$ and virtual point $B^{\prime\prime}$. Thus ${l_{bound}}$ is given by:
$${l_{bound}} = {l_{B^{\prime\prime}C^{\prime}}} - {l_{O^{\prime}C}} - {l_{O^{\prime}B^{\prime\prime}}},$$
where ${l_{O^{\prime}C}}$ is given during design and ${l_{O^{\prime}B^{\prime\prime}}}$ can be derived from Eq. (4); thus ${l_{bound}}$ can be obtained from Eq. (12).
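The calibration chain of Eqs. (9)-(12) can be sketched as follows; every input (the disparity ${l_{{P_0}{P_1}}}$, the calibrated d and b, and ${l_{O^{\prime}B^{\prime\prime}}}$ from Eq. (4)) is a stand-in example value rather than a measured one.

```python
# Hedged sketch of the l_bound calibration of Eqs. (9)-(12); every
# input below (disparity, calibrated b and d, shell radius, l_{O'B''})
# is a stand-in example value rather than a measured one.

def l_bound(l_P0P1, D_MLA, b, d, f, l_OpC, l_OpBpp):
    a_B = b * D_MLA / (D_MLA - l_P0P1)     # Eq. (9): MLA object distance
    v_B = a_B + d                          # Eq. (10): main-lens image distance
    l_BppCp = 1.0 / (1.0 / f - 1.0 / v_B)  # Eq. (11): Gauss equation
    return l_BppCp - l_OpC - l_OpBpp       # Eq. (12)

# Example call with assumed values (mm): disparity 0.1, microlens
# diameter 0.5, b = 1, d = 65, f = 50, inner radius 55, l_{O'B''} = 125.7.
print(l_bound(0.1, 0.5, 1.0, 65.0, 50.0, 55.0, 125.7))
```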

As a result, to satisfy the condition $e \ge {l_{bound}}$ while keeping the FOV of the underwater plenoptic camera close to that of the inner plenoptic camera, the distance e should follow:

$$e = \left\{ {\begin{array}{c} {0,\quad {l_{bound}} < 0}\\ {{l_{bound}},\quad {l_{bound}} \ge 0} \end{array}} \right.$$

The optimization satisfying Eq. (13) is called FOV optimization. The model satisfying Eq. (5), Eq. (8) and Eq. (13) thus improves image clarity and optimizes the FOV simultaneously, and is called joint optimization in the following.

In addition, when the image distance is modified from v to $v + \Delta v$, the aperture size of the main lens needs to be adjusted accordingly to maximize the utilization of the image sensor [14], which can be expressed as follows:

$$\frac{{v + \Delta v - a}}{{{D_{aper}} + \Delta {D_{aper}}}} = \frac{b}{{{D_{MLA}}}},$$
where ${D_{aper}}$ is the aperture size of the main lens before adding the spherical shell, which satisfies $({v - a} )/{D_{aper}} = b/{D_{MLA}}$, and $\Delta {D_{aper}}$ is the change in the aperture size of the main lens. Thus, the working f-number of the MLA stays $b/{D_{MLA}}$, while the working f-number of the main lens changes from $v/{D_{aper}}$ to $({v + \Delta v} )/({{D_{aper}} + \Delta {D_{aper}}} )$.
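Because the MLA's working f-number $b/{D_{MLA}}$ is fixed, Eq. (14) reduces to a simple proportionality, sketched below with assumed values.

```python
# Sketch of the aperture adjustment of Eq. (14). Substituting
# (v - a)/D_aper = b/D_MLA into Eq. (14) gives ΔD_aper = Δv·D_MLA/b:
# the aperture grows in proportion to the image-distance shift so the
# MLA's working f-number b/D_MLA is preserved.

def aperture_change(delta_v, D_MLA, b):
    return delta_v * D_MLA / b

# Δv = 2.205 mm is taken from Table 2; the MLA geometry is assumed.
print(aperture_change(2.205, 0.5, 1.0))
```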

Figure 5 summarizes the optimization of image clarity and FOV. Starting from a plenoptic camera and a spherical shell, an underwater plenoptic camera is built with initial values of $\Delta v$ and e. From the geometric parameters of the spherical shell, the ray propagation analysis of Eq. (5) yields the equivalent object distance for the main lens. Clarity optimization then uses Eq. (5) and Eq. (8) to obtain a set of solution pairs for $\Delta v$ and e as e varies, which improves image clarity. Next, the minimum distance between the optical center of the commercial main lens and the center of the spherical shell, ${l_{bound}}$, is obtained by calibration, and FOV optimization applies Eq. (13) with the calibrated ${l_{bound}}$ to select a unique solution for $\Delta v$ and e. The joint optimization model, combining clarity optimization and FOV optimization, uses Eq. (5), Eq. (8) and Eq. (13) to optimize clarity and FOV simultaneously. Finally, after adjusting the aperture size of the main lens using Eq. (14), the optimized parameters of the underwater plenoptic camera are obtained.


Fig. 5. The process of optimizing an underwater plenoptic camera.


It is important to note that during the optimization of an underwater plenoptic camera, certain parameters, such as the radius and refractive index of the spherical shell, the refractive index of water, and the object distance, are treated as prior information. If circumstances change, these parameters can be adjusted and the underwater plenoptic camera re-optimized using the same procedure shown in Fig. 5. This may involve directly modifying the parameters, such as the radius and refractive index of the spherical shell, or accounting for hidden variables that influence them, such as the effects of temperature [31] or electrolyte concentration [32] (e.g., salt content) of seawater on the refractive index of water.

After computing the optimized distances $\Delta v$ and e, the following adjustment strategy is employed during the actual assembly process. First, $\Delta v$ is adjusted in air. Using our calibration method [28,29], we calibrate the image distance of the main lens after the initial adjustment, ${v_{ADJ\_1}}$, compare it with the image distance v, and calculate the difference before and after the initial adjustment:

$$\Delta {v_1} = {v_{ADJ\_1}} - v,$$
if the difference $\Delta {v_1}$ is smaller than the desired value $\Delta v$, we perform additional adjustments to meet the requirement; conversely, if $\Delta {v_1}$ is greater than $\Delta v$, we adjust the plenoptic camera in the opposite direction. This process is repeated until the absolute difference between $\Delta {v_1}$ and $\Delta v$ is smaller than a predefined tolerance $\Delta {v_{TOL}}$.

Next, we adjust the distance e for the underwater plenoptic camera after incorporating the spherical shell. To achieve this, the plenoptic camera’s outer radius is designed to match the inner radius of the waterproof housing; aligning the optical axis of the plenoptic camera with the middle of the housing guarantees that their optical axes lie on the same line [33]. The distance e can then be adjusted one-dimensionally to regulate the separation between the plenoptic camera and the spherical shell, ensuring that the distance ${l_{{P_0}{P_1}}}$ between image points of the object in adjacent microlenses in Fig. 4 satisfies the following relationship, which indicates that the plenoptic camera is in focus:

$${l_{{P_0}{P_1}}} = \frac{{a - b}}{a}{D_{MLA}},$$
where a represents the object distance of the MLA, which is determined by Eq. (6) and Eq. (7).
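Combining Eqs. (6), (7) and (16), the in-focus disparity depends only on N and ${D_{MLA}}$; a small sketch with assumed ${f_{MLA}}$, N and ${D_{MLA}}$:

```python
# Sketch combining Eqs. (6), (7) and (16): with f_MLA and N fixed by
# design, a and b follow in closed form and so does the in-focus
# disparity. f_MLA, N and D_MLA below are assumed example values.

def in_focus_disparity(f_MLA, N, D_MLA):
    a = (N - 1.0) * f_MLA       # from -1/a + 1/b = 1/f_MLA with a = N·b
    b = a / N                   # Eq. (7)
    return (a - b) / a * D_MLA  # Eq. (16), equal to D_MLA·(N - 1)/N

print(in_focus_disparity(1.5, 3.13, 0.5))
```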

2.3 DoF analysis of the optimized underwater plenoptic cameras

The underwater plenoptic camera can capture underwater angular and spatial information with an extended depth of field, allowing for certain variations in object distance resulting from object movement or positional errors. However, when substantial changes in object distance occur, it becomes essential to evaluate whether the entire underwater plenoptic camera must be re-optimized. To address this concern, we propose a depth of field (DoF) analysis for the optimized underwater plenoptic camera. The purpose of this analysis is to determine the depth of field of the underwater plenoptic camera and to evaluate whether re-optimization is necessary when the object distance changes significantly.

It is well known that a camera image is in focus within a certain range of distances from the lens. Beyond this range, the image becomes defocused and can be characterized by a blur circle. For the underwater plenoptic camera, the range of distances that appear in focus corresponds to a blur circle, denoted $\delta ^{\prime}$, that satisfies $\delta ^{\prime} \le \delta $, where $\delta $ represents the maximum acceptable blur diameter. Accordingly, the virtual object distance ${l_{D^{\prime\prime}M}}$ in Fig. 6 is constrained to keep the plenoptic camera in focus; at the same time, ${l_{D^{\prime\prime}M}}$ is a function of ${d_{OBJ\_D}}$ described by Eq. (5), and may be shorter or longer than the currently set virtual object distance. Taking the shorter focused virtual distance ${l_{D^{\prime\prime}M}}$ as an instance, the diameter of the blur circle ${\delta _D}$ is derived as follows. Note that in the design of plenoptic cameras the pixel size cannot be very small [34], so diffraction effects are neglected in the following derivations.


Fig. 6. Relationship between object distance and blur circle.


After using Eq. (5) to calculate the shorter focused virtual distance ${l_{D^{\prime\prime}M}}$, the image distance of the main lens, ${v_D}$, can be obtained as follows:

$$\frac{1}{{{l_{D^{\prime\prime}M}}}} + \frac{1}{{{v_D}}} = \frac{1}{f}.$$

Using ${v_D}$ from Eq. (17) and the calibrated value d, the object distance of the MLA can be expressed as ${a_D} = {v_D} - d$, and the image distance ${b_D}$ of the MLA follows from the Gaussian equation:

$$- \frac{1}{{{a_D}}} + \frac{1}{{{b_D}}} = \frac{1}{{{f_{MLA}}}}.$$

Based on the similar-triangle principle, the relationship between ${\delta _D}$ and ${b_D}$ is given by:

$$\frac{{{\delta _D}}}{{{D_{MLA}}}} = \frac{{{b_D} - b}}{{{b_D}}},$$
${\delta _D} < \delta $ corresponds to an object distance ${d_{OBJ\_D}}$ within the DoF. If ${\delta _D} > \delta $, the underwater plenoptic camera is defocused and must be re-optimized by recalculating $\Delta v$ using Eq. (8), since e is already fixed.

Beyond varying ${d_{OBJ\_D}}$, Eq. (17), Eq. (18) and Eq. (19) can also be used to analyze the tolerance to installation errors, since the relationship between ${l_{D^{\prime\prime}M}}$ and the installation parameter e is already given by Eq. (5).
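The DoF check of Eqs. (17)-(19) can be sketched as follows; the parameter values are assumptions for illustration, and the absolute value covers virtual object distances on either side of the in-focus setting.

```python
# Sketch of the DoF check of Eqs. (17)-(19); f, d, f_MLA, b and D_MLA
# below are assumed example values, not the paper's design parameters.

def blur_circle(l_DppM, f, d, f_MLA, b, D_MLA):
    v_D = 1.0 / (1.0 / f - 1.0 / l_DppM)   # Eq. (17): main-lens image distance
    a_D = v_D - d                          # object distance of the MLA
    b_D = 1.0 / (1.0 / f_MLA + 1.0 / a_D)  # Eq. (18): -1/a_D + 1/b_D = 1/f_MLA
    # Eq. (19); the absolute value covers b_D on either side of b.
    return abs(D_MLA * (b_D - b) / b_D)

def needs_reoptimization(l_DppM, delta, **params):
    """True when the blur circle exceeds the acceptable diameter δ."""
    return blur_circle(l_DppM, **params) > delta
```

With the assumed values f = 50 mm, d = 65 mm, ${f_{MLA}}$ = 1.5 mm, b = 1 mm and ${D_{MLA}}$ = 0.5 mm, the camera is sharp near an equivalent distance of about 189 mm, and the blur circle grows as ${l_{D^{\prime\prime}M}}$ moves away from it.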

3. Experimental results

In this section, the correctness and effectiveness of the proposed optimization method are demonstrated by tests on a simulated system and a real imaging system.

3.1 Test on the simulated system

To verify the accuracy of our calibration method, standard optical simulation software [35] is used to simulate the imaging results of the plenoptic camera and of the underwater plenoptic camera with a square-arranged MLA. To verify the effectiveness of the proposed optimization model in improving image clarity and FOV, we first compare four cases, corresponding to the four stages in Fig. 5, using simulation: Plenoptic Camera, the input plenoptic camera in Fig. 5 without the spherical shell, serving as the reference; Adding a Spherical Shell Directly, the step in Fig. 5 that builds an underwater plenoptic camera without any optimization; Clarity Optimization, optimizing the underwater plenoptic camera using Eq. (5) and Eq. (8); and Joint Optimization, optimizing the underwater plenoptic camera using Eq. (5), Eq. (8) and Eq. (13).

The reference case Plenoptic Camera is a focused plenoptic camera with an ideal lens capable of capturing 3D information [14], whose angular resolution is 3.13 and depth resolution is 30 mm, given an object distance ${d_{OBJ}}$ of 800 mm and 1/2 pixel as the minimum distinguishable parallax change. The values of its optical parameters are listed in Table 1, where the position of the main lens refers to its principal point.


Table 1. Optical Parameter Values of the Plenoptic Camera

Table 2 lists the values of the optimized variables e and $\Delta v$ in the other three testing cases. As shown in the table, in the case of Adding a Spherical Shell Directly, e is set to 30 mm and $\Delta v$ is 0 mm, which also serve as the initial values for the subsequent optimization. After Clarity Optimization, e remains 30 mm and $\Delta v$ is derived as 1.745 mm. Then, setting ${l_{bound}}$ to 0 mm in the simulation and applying Joint Optimization, the optimized variables e and $\Delta v$ are derived as 0 mm and 2.205 mm, respectively. It is worth mentioning that although 30 mm was specifically chosen as one of the distance values for e, it serves as a representative and effective point of comparison; other distances yield similar results, further reinforcing the validity of our observations.

Table 2. Values of Optimized Variables in the other 3 Testing Cases

Image clarity and FOV of the underwater plenoptic cameras in the other three testing cases are compared by simulation using the corresponding values of the optimization variables in Table 2. Three objects with representative textures are simulated and compared: a ChArUco calibration board, whose regular grids and sharp edges provide striking contrast, and two fishes, whose sharp and colorful textures demonstrate the robustness of the method. A four-step procedure of simulation, rendering, image processing and evaluation is used to quantitatively evaluate the clarity of the simulated underwater plenoptic images. Taking the ChArUco calibration board in Fig. 7(a) as an instance: first, it is used as the object for simulating the plenoptic images. Second, a rendering algorithm based on appropriate patches [28] is used to obtain central subaperture images; it uses SSIM [36] to pick the appropriate patch size, as shown in Fig. 7(b), and stitches the extracted patches to produce the rendered subaperture images shown in Fig. 7(c). Then, the pixel value range of each subaperture image is normalized to [0, 255] so that the comparisons are not disturbed by light transmission efficiency. Finally, the Energy of Gradient (EOG) [37,38] is chosen as the criterion to evaluate image clarity due to its simplicity and accuracy. Let $f({x,\; y} )$ denote the value of pixel $({x,y} )$; EOG is calculated as:

$$EOG = \sum_x \sum_y \left\{ {{[{f({x + 1,\,y} ) - f({x,\,y} )} ]}^2} + {{[{f({x,\,y + 1} ) - f({x,\,y} )} ]}^2} \right\}$$
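As an illustration, the normalization and EOG steps above can be sketched in a few lines of NumPy (the function names are ours; the sums run over all pixels for which both forward differences exist):

```python
import numpy as np

def normalize_to_255(img):
    """Stretch pixel values to [0, 255] so that clarity comparisons
    are not biased by light transmission efficiency."""
    f = img.astype(np.float64)
    lo, hi = f.min(), f.max()
    return (f - lo) / (hi - lo) * 255.0 if hi > lo else np.zeros_like(f)

def energy_of_gradient(img):
    """Energy of Gradient (EOG): sum of squared horizontal and
    vertical forward differences, as in the equation above."""
    f = img.astype(np.float64)
    dx = f[1:, :-1] - f[:-1, :-1]   # f(x+1, y) - f(x, y)
    dy = f[:-1, 1:] - f[:-1, :-1]   # f(x, y+1) - f(x, y)
    return float(np.sum(dx**2 + dy**2))
```

Applied to the rendered subaperture images, a larger return value corresponds to a sharper image.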

Fig. 7. (a) ChArUco calibration board; (b) best-matched patch and corresponding calculation method of patch size; (c) rendered subaperture images captured by a simulated plenoptic camera before adding a spherical shell in air condition.

A larger EOG indicates higher image clarity. In addition, subjective evaluation is used to compare the FOVs, whose differences are significant across the different cases. All tested subaperture images and their corresponding EOGs are shown in the even rows of Fig. 8.

Fig. 8. Simulated plenoptic images and corresponding central subaperture images of: (a) and (b) ChArUco calibration board; (c) and (d) guppy; (e) and (f) Oscar fish under different testing cases.

Figure 8(a)-(b) shows the simulated plenoptic images of the ChArUco calibration board and the corresponding central subaperture images. As shown in the figure, the EOG of the subaperture image simulated with Clarity Optimization is 5380, 2.34 times that of the subaperture image simulated with Adding a Spherical Shell Directly, and the image is less blurry, which demonstrates the effectiveness of Clarity Optimization. Moreover, the EOG of the subaperture image simulated with Joint Optimization is 6200, noticeably higher than that of Clarity Optimization; at the same time, the FOV of the underwater plenoptic image with Joint Optimization stays much closer to the FOV of the Plenoptic Camera used in air than in the other two cases, which substantiates the effectiveness of our method in improving image clarity and optimizing FOV.

Figure 8(c)-(f) shows the simulated plenoptic images of the fishes and the corresponding central subaperture images. As shown in the figure, the EOGs of the underwater plenoptic images with Clarity Optimization and Joint Optimization are 2.52 times and 5.44 times that of the central subaperture images simulated with Adding a Spherical Shell Directly. At the same time, under subjective evaluation, the subaperture images simulated with Joint Optimization show less blur and keep an FOV more consistent with the Plenoptic Camera used in air, which shows the robustness of our method in optimizing FOV and image clarity for the textures characteristic of underwater life.

3.2 Test on a real imaging system

To further demonstrate the effectiveness and robustness of our optimization model, it is applied to a real self-designed focused plenoptic camera [9], shown in Fig. 9(a). Our calibration method [28,29] is used to obtain the physical parameters of the plenoptic camera, which are listed in Table 3. As in the simulation, the focused plenoptic camera can perform 3D reconstruction in air: its angular resolution is 3.26 and its depth resolution is 36 mm for an object distance of 800 mm and a minimum distinguishable parallax change of 1/2 pixel [14].

Fig. 9. Real imaging system. (a) Inner self-designed focused plenoptic camera; (b) self-designed underwater focused plenoptic camera; (c) configurations of underwater experiments.

Table 3. Optical Parameters Values of Self-designed Focused Plenoptic Camera

In real experiments, shifting e quantitatively is not easy, so we only compare the cases Adding a Spherical Shell Directly and Joint Optimization, which is sufficient to further prove the effectiveness of the proposed optimization model; both cases are configured consistently with the simulation experiments. ${l_{bound}}$ is calibrated to be 19 mm using the proposed calibration method in Eq. (12). The corresponding values of the optimized variables, both those set in the case Adding a Spherical Shell Directly and those derived using Joint Optimization, are listed in Table 4. The assembled underwater plenoptic camera with an acrylic spherical shell is shown in Fig. 9(b).

Table 4. Values of Optimized Variables in 2 Test Cases

Using the values of the optimized variables in Table 4, the underwater plenoptic cameras in the two test cases are compared. Four typical real underwater scenes including coral, fishes and starfishes are set up to test the actual performance of our self-designed underwater plenoptic cameras; the objects are placed at an object distance of around 800 mm. An example of a real scene configuration is shown in Fig. 9(c). As in the simulation, the results are quantitatively evaluated by comparing image clarity and richness of textures. The evaluation process for the real imaging system differs slightly from the simulation in the third step: to eliminate differences in light transmission efficiency, we preprocess the subaperture images of the unoptimized underwater plenoptic camera to keep their luminance consistent with the subaperture images of the optimized underwater plenoptic camera. Since the FOV optimization has been validated in the simulation and there is no reference in the real experiments, FOV is not compared in the real experiments.
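A minimal sketch of the luminance-matching preprocessing in the third step; matching the mean luminance by a global gain is our assumption, since the paper does not specify the exact operator:

```python
import numpy as np

def match_luminance(src, ref):
    """Scale src so its mean luminance matches that of ref,
    then clip back to the valid 8-bit range."""
    src = src.astype(np.float64)
    gain = ref.astype(np.float64).mean() / max(src.mean(), 1e-9)
    return np.clip(src * gain, 0, 255)
```

After this step, EOG differences between the two cameras reflect sharpness rather than brightness.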

Real experiment results are shown in Fig. 10, where the EOGs of the central subaperture images captured by the underwater plenoptic cameras Adding a Spherical Shell Directly and with Joint Optimization are displayed below the corresponding subaperture images. The EOGs of the subaperture images captured with Joint Optimization are all around 3 times those captured with Adding a Spherical Shell Directly. In addition, as shown in the figure, the subaperture images captured by the optimized plenoptic camera are less blurry and present more delicate textures. The EOG and subjective evaluations indicate that our optimization method is effective and robust for underwater focused plenoptic cameras, which greatly benefits their underwater applications.

Fig. 10. Real experiment results. Underwater plenoptic images and corresponding central subaperture images of: (a) starfish; (b) fish; (c) crucian; and (d) coral.

3.3 Refocusing experiment on underwater plenoptic images

To evaluate the refocusing capability of our optimized underwater plenoptic camera, a real underwater scene is created, consisting of two fish and grass. The fish are positioned at different object distances: one fish is placed at around 500 mm as the foreground element in the upper half of the image, while the other fish and the grass are placed at around 900 mm as background elements in the lower half. Figure 11(a) illustrates an example of an underwater plenoptic image captured in this setup.

Fig. 11. Underwater plenoptic images with multiple depths and corresponding depth map, rendering results.

In our study, two approaches for obtaining depth maps from plenoptic images are considered: direct estimation from the original plenoptic images [9] and estimation from rendered multiview images [39,40]. We employ the original plenoptic images to extract depth maps and enable refocusing of the underwater plenoptic images; the procedure remains consistent with the methodology described in [9]. Thus, the resulting depth maps and refocused images correspond directly to the virtual object distance of the inner plenoptic camera (disregarding aberrations), rather than the actual object distance, which can be calculated as a function of that virtual object distance.

Similar to Fig. 7, we employ the structural similarity index (SSIM) as a criterion to generate an array of patch-size values, one per microlens image, which also serves as the depth map of the plenoptic image depicted in Fig. 11(b). The resolution of the depth map equals the number of microlenses in the microlens array (MLA), and is therefore relatively small due to the limited number of microlenses in our specific MLA design.
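The SSIM-driven patch-size selection can be sketched as below. Here `global_ssim` is a single-window SSIM, and the assumed neighbor shift equal to the candidate patch size illustrates the focused-plenoptic matching idea rather than the paper's exact rule:

```python
import numpy as np

def global_ssim(a, b, c1=6.5025, c2=58.5225):
    """Single-window SSIM between two equal-size patches."""
    a, b = a.astype(np.float64), b.astype(np.float64)
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2*mu_a*mu_b + c1) * (2*cov + c2)) / \
           ((mu_a**2 + mu_b**2 + c1) * (va + vb + c2))

def best_patch_size(ml_img, neighbor, sizes):
    """Pick the candidate patch size whose central patch best matches
    the corresponding patch in the neighboring microlens image; the
    selected size plays the role of a per-microlens depth value."""
    h, w = ml_img.shape
    best, best_score = sizes[0], -np.inf
    for s in sizes:
        ca, cb = (h - s) // 2, (w - s) // 2
        pa = ml_img[ca:ca+s, cb:cb+s]
        # assumed matching rule of this sketch: the matching patch in
        # the neighbor is displaced by the patch size itself
        c0 = max(cb - s, 0)
        pb = neighbor[ca:ca+s, c0:c0+s]
        if pb.shape != pa.shape:
            continue
        score = global_ssim(pa, pb)
        if score > best_score:
            best, best_score = s, score
    return best
```

Running this per microlens image yields the patch-size array used as the depth map.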

Considering that the refocusing capability of a plenoptic 2.0 camera depends on the selection of the patch size [9], we render subaperture images using small and large patches, as depicted in Fig. 11(c) and Fig. 11(d). When a small patch is employed, the foreground object (in the upper half of the image) appears clear, while the background objects (the grass and fish in the lower half) exhibit block artifacts. Conversely, when a large patch is used, the results are reversed: the foreground object exhibits block artifacts and the background objects appear clear.

By incorporating the depth map of Fig. 11(b) during the rendering process, we obtain artifact-free rendered subaperture images of the scene, as depicted in Fig. 11(e). These images represent an “all-in-focus” view of the scene, in which both foreground and background objects are captured with optimal clarity and few block artifacts.
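Depth-guided rendering of the all-in-focus view can be sketched as follows, with the per-microlens patch size acting as the depth map; overlap handling and color are omitted in this simplification of ours:

```python
import numpy as np

def nn_resize(patch, out):
    """Nearest-neighbour resize of a square patch to out x out."""
    s = patch.shape[0]
    idx = np.arange(out) * s // out
    return patch[np.ix_(idx, idx)]

def render_all_in_focus(ml_images, size_map, out_patch=8):
    """Stitch one central patch per microlens image, taking each
    patch size from the per-microlens depth (patch-size) map and
    resizing every patch to a common size before tiling.

    ml_images: 2D grid (list of lists) of square microlens images.
    size_map:  matching 2D array of per-microlens patch sizes.
    """
    rows = []
    for i, row in enumerate(ml_images):
        tiles = []
        for j, ml in enumerate(row):
            s = int(size_map[i][j])
            c = (ml.shape[0] - s) // 2
            tiles.append(nn_resize(ml[c:c+s, c:c+s], out_patch))
        rows.append(np.hstack(tiles))
    return np.vstack(rows)
```

Each region is thus rendered with the patch size appropriate to its depth, which is what suppresses the block artifacts seen with a single global patch size.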

4. Conclusion

Underwater plenoptic cameras can be used for a variety of underwater tasks, such as underwater 3D reconstruction and underwater inspection. In this paper, an optimization model is proposed to reduce the degradation introduced when designing an underwater plenoptic camera. By analyzing the imaging process of the underwater plenoptic camera and the optical properties of the spherical shell in water, a paraxial ray propagation analysis for the spherical shell is derived, and the overall geometric optics model of the underwater plenoptic camera is given. An optimization model is then proposed to compensate for the changes in image clarity and FOV caused by the spherical shell and the water while guaranteeing a successful assembly. Simulated and real-system results show that the proposed method effectively compensates for the performance degradation caused by the spherical shell and water, which benefits the underwater use of plenoptic cameras. In the future, our work will focus on improving the accuracy of depth maps estimated from rendered multiview images, particularly by accounting for aberrations; investigating postprocessing techniques that compensate for image blurring caused by wave-induced disturbances; and using wave optics and Fourier optics to further improve the performance of underwater cameras.

Funding

Shenzhen Project, China (JSGG20210802154807022); National Natural Science Foundation of China (61991451).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. D. R. Yoerger, A. M. Bradley, B. B. Walden, M. -H. Cormier, and W. B. F. Ryan, “Fine-scale seafloor survey in rugged deep-ocean terrain with an autonomous robot,” in International Conference on Robotics and Automation (Cat. No. 00CH37065) (IEEE, 2000), pp. 1787–1792.

2. J. Xiong and W. Heidrich, “In-the-wild single camera 3D reconstruction through moving water surfaces,” in Proceedings of the IEEE/CVF International Conference on Computer Vision (2021), pp. 12558–12567.

3. N. Hurtós, X. Cufí, and J. Salvi, “Calibration of the optical camera coupled to acoustic multibeam for underwater 3D scene reconstruction,” in OCEANS'10 (IEEE, 2010), pp. 1–7.

4. S. T. Digumarti, G. Chaurasia, A. Taneja, R. Siegwart, A. Thomas, and P. Beardsley, “Underwater 3D capture using a low-cost commercial depth camera,” in Winter Conference on Applications of Computer Vision (IEEE, 2016), pp. 1–9.

5. M. Prats, J. J. Fernández, and P. J. Sanz, “Combining template tracking and laser peak detection for 3D reconstruction and grasping in underwater environments,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (2012), pp. 106–112.

6. C. Beall, B. J. Lawrence, V. Ila, and F. Dellaert, “3D reconstruction of underwater structures,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (2010), pp. 4418–4423.

7. Z. Ma and S. Liu, “A review of 3D reconstruction techniques in civil engineering and their applications,” Adv. Eng. Inform. 37, 163–174 (2018). [CrossRef]  

8. H. Wang, X. Zhao, and X. Yuan, “3D path planning of underwater robot based on sparrow search algorithm with potential field heuristic,” Proc. SPIE 12288, 122881U (2022). [CrossRef]  

9. T. Georgiev and A. Lumsdaine, “Focused plenoptic camera and rendering,” J. Electron. Imaging 19(2), 021106 (2010). [CrossRef]  

10. F. Menna, E. Nocerino, F. Fassi, and F. Remondino, “Geometric and optic characterization of a hemispherical dome port for underwater photogrammetry,” Sensors 16(1), 48 (2016). [CrossRef]  

11. F. Menna, E. Nocerino, and F. Remondino, “Flat versus hemispherical dome ports in underwater photogrammetry,” Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 42, 481–487 (2017). [CrossRef]  

12. Y. T. Peng and P. C. Cosman, “Underwater image restoration based on image blurriness and light absorption,” IEEE Trans. on Image Process. 26(4), 1579–1594 (2017). [CrossRef]  

13. M. Jian, X. Liu, H. Luo, X. Lu, H. Yu, and J. Dong, “Underwater image processing and analysis: A review,” Signal Processing: Image Communication 91, 116088 (2021). [CrossRef]  

14. C. Perwaß and L. Wietzke, “Single lens 3D-camera with extended depth-of-field,” Proc. SPIE 8291, 829108 (2012). [CrossRef]  

15. V. Drazic, J.-J. Sacré, J. Bertrand, A. Schubert, and E. Blondé, “Optimal design and critical analysis of a high resolution video plenoptic demonstrator,” Proc. SPIE 7863, 786318 (2011). [CrossRef]  

16. T. Li, S. Li, Y. Yuan, Y. Liu, C. Xu, Y. Shuai, and H. Tan, “Multi-focused microlens array optimization and light field imaging study based on Monte Carlo method,” Opt. Express 25(7), 8274–8287 (2017). [CrossRef]  

17. T. Michels and R. Koch, “Ray Tracing-Guided Design of Plenoptic Cameras,” in International Conference on 3D Vision (2021), pp. 1125–1133.

18. R. Qu, J. Yang, J. Cao, and B. Liu, “Design of underwater large field of view zoom optical system,” Infrared and Laser Engineering 50(7), 20200468 (2021). [CrossRef]  

19. “Guatek,” http://www.guatek.com.

20. H. Zhang, B. Su, J. He, C. Zhang, Y. Wu, S. Zhang, and C. Zhang, “Light field imaging and application analysis in THz,” Proc. SPIE 10623, 1062300 (2018). [CrossRef]  

21. M. Born and E. Wolf, Principles of Optics (Pergamon, 1980), Chap. 4.

22. T. Li, Geometrical Optics, Aberrations and Optical Design (Zhejiang University, 2003), Chap. 2.

23. J. L. Cruz-Campa, M. Okandan, M. L. Busse, and G. N. Nielson, “Microlens rapid prototyping technique with capability for wide variation in lens diameter and focal length,” Microelectron. Eng. 87(11), 2376–2381 (2010). [CrossRef]  

24. X. Q. Liu, L. Yu, S. N. Yang, Q. D. Chen, L. Wang, S. Juodkazis, and H. B. Sun, “Optical nanofabrication of concave microlens arrays,” Laser Photon. Rev. 13(5), 1800272 (2019). [CrossRef]  

25. A. Lumsdaine and T. Georgiev, “The focused plenoptic camera,” in International Conference on Computational Photography (IEEE, 2009), pp. 1–8.

26. O. N. Stavroudis, The Optics of Rays, Wavefronts, and Caustics (Elsevier, 2012), Chap. 6.

27. M. She, D. Nakath, Y. Song, and K. Köser, “Refractive geometry for underwater domes,” ISPRS-J. Photogramm. Remote Sens. 183, 525–540 (2022). [CrossRef]  

28. X. Jin, X. Sun, and C. Li, “Geometry parameter calibration for focused plenoptic cameras,” Opt. Express 28(3), 3428–3441 (2020). [CrossRef]  

29. X. Sun, X. Jin, P. Wang, Y. Chen, and Q. Dai, “Blind calibration for focused plenoptic cameras,” in International Conference on Multimedia and Expo (IEEE, 2019), pp. 115–120.

30. R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd ed. (Cambridge University, 2003).

31. T. P. Dale and J. H. Gladstone, “On the influence of temperature on the refraction of light,” Phil. Trans. R. Soc. 148, 887–894 (1858). [CrossRef]  

32. J. V. Leyendekkers and R. J. Hunter, “The Tammann-Tait-Gibson model for aqueous electrolyte solutions. Application to the refractive index,” J. Phys. Chem. 81(17), 1657–1663 (1977). [CrossRef]  

33. M. She, Y. Song, J. Mohrmann, and K. Köser, “Adjustment and calibration of dome port camera systems for underwater vision,” In Pattern Recognition: 41st DAGM German Conference (2019), pp. 79–92.

34. T. Georgiev and A. Lumsdaine, “Depth of Field in Plenoptic Cameras,” in Eurographics 2009—Annex (2009), pp. 5–8.

35. “Zemax,” http://www.zemax.com/products.

36. A. Horé and D. Ziou, “Image quality metrics: PSNR vs. SSIM,” in 20th International Conference on Pattern Recognition (2010), pp. 2366–2369.

37. F. Chen, J. Zhang, J. Cai, T. Xu, G. Lu, and X. Peng, “Infrared image adaptive enhancement guided by the energy of gradient transformation and multiscale image fusion,” Appl. Sci. 10(18), 6262 (2020). [CrossRef]  

38. A. Mittal, R. Soundararajan, and A. C. Bovik, “Making a “Completely Blind” Image Quality Analyzer,” IEEE Signal Process. Lett. 20(3), 209–212 (2013). [CrossRef]  

39. D. Bonatto, S. Fachada, T. Senoh, G. Jiang, X. Jin, G. Lafruit, and M. Teratani, “Multiview from micro-lens image of multi-focused plenoptic camera,” in International Conference on 3D Immersion (2021), pp. 1–8.

40. S. Fachada, A. Losfeld, T. Senoh, G. Lafruit, and M. Teratani, “A Calibration Method for Subaperture Views of Plenoptic 2.0 Camera Arrays,” in 23rd International Workshop on Multimedia Signal Processing (IEEE, 2021), pp. 1–6.


