Optica Publishing Group

Flexible foveated imaging using a single Risley-prism imaging system

Open Access

Abstract

Foveated imaging, which provides overall situational awareness over a large field of view together with high-resolution perception of local details, has significant advantages in many specific applications. However, existing artificially foveated imaging systems are complex, bulky, and expensive, and the flexibility of the fovea in particular is highly limited. To overcome these deficiencies, this paper proposes a method for foveated imaging by collecting multiple partially overlapping sub-fields of view. To capture these special sub-fields of view, we propose a high-efficiency algorithm, based on the characteristics of the field of view deflected by the Risley-prism, to solve for the prism rotation angles. In addition, we prove the reliability of the proposed algorithm by cross-validation with a particle swarm optimization algorithm. The experimental results show that the proposed method can achieve flexible foveated imaging using a single Risley-prism imaging system.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Vision is the main means by which humans obtain information. Benefiting from its excellent optical structure and visual information perception mechanism, the human eye can obtain visual information quickly and efficiently through foveated imaging. Foveated imaging is a multi-resolution imaging method with a small high-resolution region for accurate observation of the region of interest (ROI), known as the “foveal region” or “fovea”, and large peripheral regions of lower resolution for overall perception of the global field of view (FOV). To some extent, foveated imaging reduces the burden on the brain in processing information and simplifies the optical structure required for high-resolution imaging in the human eye. Foveated imaging has very broad application prospects [1]. It can effectively use a limited number of detector pixels to obtain more valuable image information. In addition, it can be realized through back-end data processing, improving the efficiency of information transmission. To date, foveated imaging systems have been widely used in marine search and rescue, ground target detection, land measurement, military reconnaissance, and other fields. Furthermore, modern industrial and military applications require more flexible foveated imaging; that is, the position and size of the fovea must have a larger variation range within the total FOV to improve the ability to identify scene details.

A variety of foveated imaging systems have been developed, and they can be classified into two basic types depending on how the foveated images are obtained: optical system design-based and digital image processing-based (the main method of back-end data processing) [2]. Devices based on optical structures obtain multi-resolution images through special imaging detectors or optical elements, including non-uniform imaging detectors [3,4], spatial light modulators (SLMs) [5,6], liquid crystal lenses [7], and direct superposition of FOVs with different focal lengths [8,9]. However, such systems are normally complicated and expensive, offer limited FOV expansion, and make fovea adjustment difficult. By suitable arrangement of the structure of a multi-aperture array, or by adding accessories such as prisms or freeform lenses in front of each sub-aperture, a number of sub-images with different overlapping information can be collected, and a foveated image can be obtained through subsequent digital image processing [10–12]. However, the development of foveated imaging technology based on digital image processing is seriously restricted by whether the original data can be effectively reduced. In recent years, beam scanning equipment has been increasingly used to control the imaging FOV and seems to provide a new way to solve this problem. For example, a Risley-prism imaging system (RPIS) with high beam deflection capability and control accuracy can precisely control the position and overlapping area of each sub-FOV, thus compressing the amount of raw data [13–15]. In research on Risley-prisms, coupling a fixed camera array with an RPIS is the main strategy to achieve foveated imaging [16,17]. The fixed camera array is used to obtain the overall wide FOV, while the RPIS only achieves local super-resolution within the total FOV.
Although these Risley-prism-based studies achieve satisfactory foveated imaging, all of them separate extension of the imaging FOV from achieving super-resolution and fail to adequately take advantage of the beam control capabilities of an RPIS. This means that they need to collect two sets of data, one for FOV expansion and one for local super-resolution, so the complex multi-aperture structure fails to adequately reduce the amount of raw data required for foveated imaging. In addition, the fovea has a fixed size and can only move in a small area near the center of the total FOV, so it cannot be adjusted flexibly. There have also been attempts to use a single RPIS to achieve both FOV extension and local super-resolution, but only separate experiments have been conducted on the two techniques [18].

To solve these problems and achieve flexible foveated imaging, this paper proposes a foveated imaging strategy using only an RPIS. In this paper, the RPIS is used to collect n sub-FOVs that are deflected to different positions by adjusting the rotation angles of the Risley-prism. These sub-FOVs all contain the ROI and have as much “differential information” as possible. This so-called differential information has two meanings. One is the different information in the peripheral region; each sub-FOV is deflected in a different direction, so the non-overlapping region between the sub-images records the information of different spatial positions. The other is different information within the ROI; because of the nonlinear distortion introduced by prisms, the resolution of different regions within each sub-image is different, and sub-pixel offset appears among the sub-images. According to this different information, super-resolution reconstruction and image stitching technology can be used to obtain a foveated image with both large FOV and local high resolution. In addition, it is worth noting that n≥4 is generally required to ensure the effect of FOV extension and local super-resolution. In this study, n=9 was selected.

It is effortless to include the ROI in all sub-images captured by an RPIS. However, to obtain sub-images with more differential information, especially to enlarge the total FOV by increasing the non-overlapping region of each sub-image outside the ROI, the position of each deflected sub-FOV still requires elaborate planning. In order to achieve both FOV expansion and local super-resolution using a set of data, the core of this study is to optimize the n groups of Risley-prism rotation angles to maximize the total imaging FOV under the constraint that the ROI is covered by every sub-FOV.

To solve this optimization problem, this paper proposes an evaluation index for the foveated imaging performance of RPIS according to its imaging characteristics. To improve the evaluation index as a guide, this paper proposes a Risley-prism rotation angle solving algorithm based on the characteristics of the deflected FOV. To verify the effectiveness and robustness of the proposed algorithm, we introduce a particle swarm optimization (PSO) algorithm for cross-validation. Both simulations and experiments verified the reliability of the foveated imaging strategy and the flexibility of fovea selection.

The remainder of this paper is organized as follows. In Section 2, the basic theoretical model of the study and the characteristics of the deflected FOV are introduced. In Section 3, the evaluation index of RPIS foveated imaging is proposed, and an algorithm to solve the Risley-prism rotation angles is proposed. In Section 4, the fitness function of the PSO algorithm is constructed, and the two algorithms are used for cross-validation. In Section 5, the foveated imaging experiments are reported. Finally, conclusions are drawn in Section 6.

2. Theoretical RPIS model

Figure 1 shows a typical RPIS, which consists of a stationary camera and rotational double-prism system. The two prisms Π1 and Π2 can rotate independently around the Z-axis. The principal cross sections of the two prisms are initially located in the XOZ plane, with their thickest ends pointing towards the positive X-direction. The rotation angles of the two prisms are θ1 and θ2, respectively. The wedge angles of the prisms are α1 and α2, and the refractive indices are n1 and n2, respectively. Based on the independent rotation of the two prisms, the camera boresight (red dotted line) is altered to a specific cone to change the FOV (blue rectangle). The gray dotted rectangle in Fig. 1 represents the camera’s original FOV, and the area enclosed by the blue curves represents the FOV deflected by the Risley-prism, which changes in shape, size, and orientation. The deflection of the beam is usually evaluated by the combination of altitude angle Φ and azimuth angle Θ, and the deflection of the FOV is usually reflected by its central beam (camera boresight).

Fig. 1. Typical RPIS.

2.1 Reverse ray tracing model of Risley-prism

Considering the camera’s optical axis as being in the positive Z-axis direction, a Cartesian coordinate system can be established at the midpoint of the camera lens. Figure 2 illustrates the propagation process of a beam in the RPIS, where the four surfaces of the Risley-prism are marked in sequence by numbers 1 to 4. The equipment parameters used in this study include wedge angles α1=α2=14.85°, refractive indices n1=n2=1.515, detector resolution 640×480 pixels, and camera FOV angle 51°×40°.

Fig. 2. Schematic diagram of the beam propagation process in the RPIS.

The normal vectors of the four surfaces are successively defined as N1, N2, N3, and N4 as follows, where α denotes the common wedge angle (α1 = α2 = α) [19]:

$${\boldsymbol{N}_1} = {[\cos ({\theta _1})\sin (\alpha ),\sin ({\theta _1})\sin (\alpha ),\cos (\alpha )]^{\textrm{T}}}{,}$$
$${\boldsymbol{N}_2} = {[0,0,1]^{\textrm{T}}},$$
$${\boldsymbol{N}_3} = {[0,0,1]^{\textrm{T}}}{,}$$
$${\boldsymbol{N}_4} = {[ - \cos ({\theta _2})\sin (\alpha ), - \sin ({\theta _2})\sin (\alpha ),\cos (\alpha )]^{\textrm{T}}}.$$

The incident light vector of the Risley-prism is expressed as V=(x, y, f)T, where f denotes the equivalent focal length of the camera, which can be calculated from the sampling resolution and FOV angle. By converting V to a unit vector, the direction vector of the incident light is A0=(x0, y0, z0)T. Then, the unit vectors of the refracted beam passing through the four surfaces, represented as Ai=(xi, yi, zi)T, can be solved by the vector form of Snell’s law [20]:

$${\boldsymbol{A}_i} = \frac{{{n_{i - 1}}}}{{{n_i}}}[{\boldsymbol{A}_{i - 1}} - ({\boldsymbol{A}_{i - 1}} \cdot {\boldsymbol{N}_i}){\boldsymbol{N}_i}] + {\boldsymbol{N}_i}\sqrt {1 - {{\left( {\frac{{{n_{i - 1}}}}{{{n_i}}}} \right)}^2} + {{\left( {\frac{{{n_{i - 1}}}}{{{n_i}}}} \right)}^2}{{({\boldsymbol{A}_{i - 1}} \cdot {\boldsymbol{N}_i})}^2}} ,$$
where i=1,2,3,4, and ni−1 and ni represent the refractive indices on the incident and refracted sides of surface i, respectively. Using the direction vector of the emergent ray A4, we can obtain its altitude angle Φ and azimuth angle Θ as follows:
$$\left\{ {\begin{array}{l} {{\varPhi }\textrm{ = }|{\arccos ({z_4})} |}\\ {{\varTheta } = \textrm{arctan}({{{y_4}} / {{x_4}}})} \end{array}} \right..$$

It is important to note that the range of Θ is [0, 2π], and the value of Θ must be adjusted according to the signs of x4 and y4 in Eq. (6).

When the object distance is L, the intersection point Pr(xr, yr) of the emergent ray of the RPIS and the object plane can be calculated as follows:

$$\left\{ {\begin{array}{l} {{x_r} = ({{{x_4}} / {{z_4}}}) \cdot L}\\ {{y_r} = ({{{y_4}} / {{z_4}}}) \cdot L} \end{array}} \right..$$
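The reverse ray-tracing chain of Eqs. (1)–(7) can be sketched in a few lines of NumPy. This is a minimal sketch under the paper's parameters (α = 14.85°, n = 1.515); it assumes surfaces 1–2 rotate with prism Π1 and surfaces 3–4 with prism Π2, and it uses `arctan2` to resolve the azimuth quadrant, as required after Eq. (6). Function names are ours.

```python
import numpy as np

ALPHA = np.deg2rad(14.85)   # common wedge angle alpha
N_GLASS = 1.515             # refractive index of both prisms

def refract(A, N, n_in, n_out):
    """Vector form of Snell's law, Eq. (5): refract unit ray A at a surface
    with unit normal N, going from index n_in to index n_out."""
    r = n_in / n_out
    cos_i = A @ N
    tangential = r * (A - cos_i * N)
    cos_t = np.sqrt(1.0 - r**2 + r**2 * cos_i**2)   # sqrt(1 - r^2 sin^2(theta_i))
    return tangential + cos_t * N

def trace(x, y, f, theta1_deg, theta2_deg):
    """Reverse ray tracing of V = (x, y, f)^T through the four prism surfaces;
    returns the emergent unit vector A4."""
    t1, t2 = np.deg2rad(theta1_deg), np.deg2rad(theta2_deg)
    # Surface normals, Eqs. (1)-(4); flat faces 2 and 3 point along +Z
    N1 = np.array([np.cos(t1)*np.sin(ALPHA), np.sin(t1)*np.sin(ALPHA), np.cos(ALPHA)])
    N2 = np.array([0.0, 0.0, 1.0])
    N3 = np.array([0.0, 0.0, 1.0])
    N4 = np.array([-np.cos(t2)*np.sin(ALPHA), -np.sin(t2)*np.sin(ALPHA), np.cos(ALPHA)])
    A = np.array([x, y, f], dtype=float)
    A /= np.linalg.norm(A)                          # A0: unit incident vector
    # Air -> glass -> air -> glass -> air
    for N, n_in, n_out in [(N1, 1.0, N_GLASS), (N2, N_GLASS, 1.0),
                           (N3, 1.0, N_GLASS), (N4, N_GLASS, 1.0)]:
        A = refract(A, N, n_in, n_out)
    return A

def altitude_azimuth(A4):
    """Eq. (6): altitude angle Phi and azimuth angle Theta in [0, 2*pi)."""
    phi = abs(np.arccos(A4[2]))
    theta = np.arctan2(A4[1], A4[0]) % (2.0 * np.pi)
    return phi, theta
```

Projecting onto the object plane via Eq. (7) is then just `xr = A4[0] / A4[2] * L` and `yr = A4[1] / A4[2] * L`.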

2.2 Deflected FOV of RPIS

Figure 3 illustrates the effect of the RPIS on the imaging FOV. The light from the camera sensor is constructed according to the pinhole imaging model. After being refracted by the Risley-prism, the rays with different incident angles have different degrees of deflection, as shown in Fig. 3(a). Furthermore, the angles between rays also change, resulting in nonlinear changes in the shape and resolution of the imaging FOV. When prisms with large wedge angles are used, the original rectangular FOV will be deflected into an irregular shape. In this case, the properties of the deflected FOV can be studied through its boundary, as shown in Fig. 3(b).

Fig. 3. Adjusting imaging FOV by the Risley-prism: (a) Imaging model of the RPIS. (b) Deflected FOVs from different rotation angles.

For an imaging detector with known resolution, the incident ray vector of a Risley-prism V=(x, y, f)T can be established by the boundary points P(x, y) and equivalent focal length f. By substituting all boundary ray vectors into Eqs. (1)–(7) and taking the object distance L = f, we can obtain the actual imaging area of the distortional FOV, as shown in Fig. 3(b). The purpose of setting the object distance to f is to project the deflected FOV onto the plane with the same scale as the non-deflected FOV to intuitively compare their differences. The colored curved boxes in Fig. 3(b) represent the deflected FOVs obtained under different rotation angles of the Risley-prism, while the black dashed box represents the original imaging FOV of the camera. It is obvious that the Risley-prism significantly changes the characteristics of the imaging FOV, such as size and shape. In addition, after being deflected by Risley-prism, the positions of the FOV tend to spread outwards from the origin, which is a requirement of foveated imaging.

2.3 Characteristics of the deflected FOV

For a given RPIS, the shape and size of the deflected FOV depend on the rotation angles of the Risley-prism (θ1, θ2). For convenient description, the rotation angle of prism Π1 is defined as the overall angle of the Risley-prism ψ, and the difference in rotation angles Δθ=θ1-θ2 is called the relative angle of the Risley-prism [21]:

$$\left\{ {\begin{array}{l} {{\theta_1}\textrm{ = }\psi }\\ {{\theta_2}\textrm{ = }\psi - \varDelta \theta } \end{array}} \right..$$
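The parameterization of Eq. (8) is trivial, but keeping it as an explicit pair of helpers makes the later adjustment rules ("change ψ", "change |Δθ|") easier to express. A minimal sketch (function names are ours):

```python
def prism_angles(psi, delta_theta):
    """Eq. (8): overall angle psi and relative angle delta_theta
    -> prism rotation angles (theta1, theta2)."""
    return psi, psi - delta_theta

def overall_relative(theta1, theta2):
    """Inverse mapping: prism rotation angles -> (psi, delta_theta)."""
    return theta1, theta1 - theta2
```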

As in beam control applications, when the relative angle |Δθ| is reduced, the deflected FOV gradually deviates from the center of the image plane while its size and degree of distortion continuously increase, as shown in Fig. 4(a). When |Δθ|=0, the boresight of the deflected FOV reaches the maximum altitude angle Φmax. When the two prisms are rotated synchronously to adjust ψ while keeping |Δθ| unchanged, the deflected FOV moves in a manner similar to rotation around the origin, and its size and degree of distortion barely change, as shown in Fig. 4(b) [22].

Fig. 4. Effect of Risley-prism rotation angles on characteristics of the deflected FOV: (a) Only adjusting the relative angle Δθ. (b) Only adjusting the overall angle ψ.

Referring to the first-order paraxial approximation method for beam pointing, the orientation of the deflected FOV is defined as θ=(θ1+θ2)/2 [23]. According to the above relations, the effect of adjusting (ψ, Δθ) on the coverage range of the deflected FOV can be summarized according to Table 1, which lists the motion directions of the deflected FOV in the image plane for different rotation angles of the Risley-prism.

Table 1. Variation relations of the deflected FOV when adjusting the prism rotation angles.

3. Calculation algorithm for prism rotation angles based on the characteristics of the deflected FOV

For a specific position and size of the ROI, the key issue for the RPIS to achieve flexible foveated imaging is obtaining n deflected sub-FOVs by accurately controlling the Risley-prism, which can maximize the total FOV under the premise that the ROI is included. In this section, we propose an algorithm based on the characteristics of the deflected FOV (referred to as the Characteristic Algorithm) to calculate the rotation angles of the Risley-prism under the condition of n=9. It should be noted that the specific steps of the Characteristic Algorithm will be affected by the device parameters, but its core concept is universal.

3.1 Evaluation method for foveated imaging in RPIS

Unlike typical foveated imaging methods, an RPIS can achieve flexible adjustment of the fovea due to its powerful beam control capability. In addition, the foveal ratios that can be obtained by digital image processing mainly depend on the number of sub-images; therefore, it is not appropriate to use the size of the fovea or the foveal ratios as evaluation criteria. It is worth noting that for a given ROI and number of sub-FOVs, the size of the total imaging FOV depends on the distribution of these sub-FOVs and has a maximum value. Therefore, this study focused on the size of the total imaging FOV as a criterion to evaluate the foveated imaging performance of the RPIS.

Because there is no regular boundary for the deflected FOV, instead of using the FOV angle, the projected area of the total FOV on the image plane was selected as the index to measure the size of the total FOV. Furthermore, the evaluation criterion was defined as the ratio of the total imaging FOV area to the camera’s original FOV area, which we refer to as the “Area Expansion Ratio” and denote by the symbol k.

3.2 Essential theories and initial value construction of the Characteristic Algorithm

For flexible foveated imaging, each sub-FOV in the Characteristic Algorithm should follow two basic principles: dispersion and classification.

To achieve foveated imaging using the RPIS, all sub-FOVs must cover a certain ROI. For ease of understanding, we compress the ROI to a point and simplify the distorted deflection FOV as a rectangle. In this case, the required deflected FOV is essentially a rectangle with one fixed vertex that can only rotate around that vertex. It is obvious that a larger total area can be obtained when the rotation angles of these rectangles are relatively uniformly distributed within [0, 2π], while the total area will be smaller when the rotation angles are relatively compact. The above phenomenon is the source of dispersion. Specifically, dispersion is to distribute the directions from the ROI to the sub-FOVs, which can also be simplified as the orientations of the deflected FOVs, as evenly as possible over [0, 2π].

A single RPIS achieves foveated imaging by collecting multiple deflected sub-FOVs, which have similar properties to multi-aperture systems. When the number of sub-FOVs n is unlimited, the maximum imaging FOV of the RPIS can be obtained through continuous scanning, as shown in Fig. 5(a). The light blue area represents the maximum range that a single RPIS can observe, the black dashed rectangle represents the camera’s original FOV, and the red curve represents the deflected FOV under certain rotation angles of the Risley-prism. From a simple calculation, the maximum Area Expansion Ratio of this RPIS is k=8.586. In addition, the largest inscribed rectangle parallel to the coordinate axis is shown in Fig. 5(a). The FOV angle of the area enclosed by this rectangle can reach 100°×81°, indicating that the Risley-prism can effectively expand the imaging FOV.

Fig. 5. Essential theories of the Characteristic Algorithm: (a) Maximum total FOV that the RPIS can capture. (b) Sub-aperture layout of the analogous multi-aperture device. (c) Change in Area Expansion Ratio when adjusting a single FOV. (d) Initial orientation of the sub-FOVs. (e) Coverage of the total FOV under initial conditions. (f) Partition method of the image plane.

The layout of sub-apertures of the analogous multi-aperture device for n=9 is shown in Fig. 5(b). The 5th sub-FOV is located at the center and is called the central FOV, while the other eight sub-FOVs are called the peripheral FOVs. When there is no ROI, the central FOV maintains the original imaging state of the camera, that is, θ51 = 0° and θ52 = 180°. According to the principle of dispersion, the orientations of the eight peripheral FOVs should be evenly distributed within the range of [0, 2π] while keeping θi1=θi2 to obtain a larger total FOV, as shown in Fig. 5(b). Then, the directions of the 2nd, 4th, 6th, and 8th sub-FOVs coincide with the coordinate axes to maintain symmetry, and the total FOV can be further increased by fine-tuning the directions of the remaining four peripheral FOVs. Because of the symmetry, we only need to analyze some of the sub-FOVs. Fixing the rotation angles of the 2nd and 6th sub-FOVs at θ21=θ22=90° and θ61=θ62=0° and varying the rotation angle of the 3rd sub-FOV from 0 to 90°, the Area Expansion Ratio of the three sub-FOVs can be calculated as shown in Fig. 5(c). The maximum Area Expansion Ratio is obtained when θ31=θ32=41°, which also proves that dispersion helps to obtain a larger total imaging FOV. Finally, the rotation angles of the Risley-prism corresponding to each sub-aperture are shown in Fig. 5(d), and the total FOV, for which k=6.148, is shown in Fig. 5(e).

Based on analogy and progression, the rotation angles obtained without the ROI constraint in Fig. 5(d) are selected as the initial rotation angles of the Characteristic Algorithm. When the ROI exists, each sub-FOV must be adjusted until it covers the ROI. The basic strategy is for the optical axis of the central FOV to point as directly at the target as possible and for the eight peripheral FOVs to be distributed around the ROI as evenly as possible.

Intuitively, each sub-FOV can be separately adjusted to cover the ROI according to Table 1, and this strategy is feasible when there are few sub-FOVs. However, with an increase in the number of sub-FOVs, the coupling relationship between them will be relatively complex. At this time, if the relationship between sub-FOVs is ignored and sub-FOVs are adjusted independently, the dispersion between them will be destroyed, causing a large amount of overlap outside the ROI. Therefore, the principle of classification is proposed to simplify the coupling relationship between sub-FOVs so as to adjust sub-FOVs independently while maintaining their dispersion as much as possible.

The ROI is observed by projecting onto the image plane, so it is necessary to partition the image plane before classifying these sub-FOVs. Corresponding to the initial directions of the eight peripheral FOVs, the image plane is divided into eight sub-regions, marked as I to VIII in Fig. 5(f). The angles labeled in Fig. 5(f) are the boundaries of the sub-regions. The location of the ROI is expressed by the coordinate of its central point, and any ROI must belong to a certain sub-region. Based on the angular distance between the ROI and sub-region in which the ROI is located, the eight sub-FOVs (except the central FOV) can be divided into three categories: adjacent FOVs, remote FOVs, and transitional FOVs. Figure 5(d) can be used as an illustrative example of this. The ROI is located in area II, which corresponds to the 3rd sub-FOV, and the nearest areas to region II are regions III and I, corresponding to the 2nd and 6th sub-FOVs, respectively. Therefore, the 2nd, 3rd, and 6th sub-FOVs are collectively called adjacent FOVs. The remote FOVs, i.e., the three sub-FOVs furthest from region II, are the 4th, 7th, and 8th sub-FOVs. Finally, the 1st and 9th sub-FOVs sandwiched between the adjacent and remote FOVs are called transitional FOVs.
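The classification rule above can be sketched as a ranking by angular distance: the three peripheral FOVs whose initial orientations lie closest to the ROI direction become the adjacent FOVs, the three farthest become the remote FOVs, and the remaining two become the transitional FOVs. The sketch below is a hypothetical implementation; the initial orientations are read off Fig. 5(d) (0°, 41°, 90°, 139°, and their symmetric counterparts), and the sub-FOV numbering follows the example in the text.

```python
import math

# Assumed initial orientations (deg) of the eight peripheral sub-FOVs, per Fig. 5(d);
# the diagonal angles follow from the optimal 41 degrees found for the 3rd sub-FOV.
PERIPHERAL = {6: 0.0, 3: 41.0, 2: 90.0, 1: 139.0, 4: 180.0, 7: 221.0, 8: 270.0, 9: 319.0}

def angular_distance(a, b):
    """Smallest absolute difference between two angles, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def classify(roi_center):
    """Split the peripheral sub-FOVs into adjacent / transitional / remote FOVs
    by angular distance between the ROI direction and each initial orientation."""
    roi_angle = math.degrees(math.atan2(roi_center[1], roi_center[0])) % 360.0
    ranked = sorted(PERIPHERAL, key=lambda i: angular_distance(PERIPHERAL[i], roi_angle))
    return {"adjacent": sorted(ranked[:3]),
            "transitional": sorted(ranked[3:5]),
            "remote": sorted(ranked[5:])}
```

For the ROI centered at (100, 100) (region II), this reproduces the grouping given in the text: adjacent {2, 3, 6}, transitional {1, 9}, remote {4, 7, 8}.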

3.3 Detailed process of the Characteristic Algorithm

The Characteristic Algorithm is briefly summarized in Fig. 6, which mainly includes the initialization of parameters and the solution of rotation angles corresponding to the three types of FOV.

Fig. 6. Brief summary of the Characteristic Algorithm.

Parameter initialization was introduced in detail in Section 3.2, so the next step is to adjust the remote FOVs. The initial locations of the remote FOVs can hardly cover the ROI, and there are two main methods to correct them. One is to rotate the remote FOVs around the origin by adjusting ψ, but this strategy will lead to a large overlap between the sub-FOVs. Therefore, we chose to increase the relative angle |Δθ| by rotating the two prisms in opposite directions at equal velocities, as shown in Fig. 7(a). The green rectangular area represents the ROI located in region II, and the red curves represent the initial position of the 4th sub-FOV, which is a remote FOV. The FOV obtained by decreasing ψ is shown as the upper blue curved area. In this case, although the 4th sub-FOV satisfies the requirement of including the ROI, it overlaps too much with the transitional FOV represented by the black dotted line, which is not beneficial to enlarging the total FOV. The FOV obtained by increasing |Δθ| is shown as the nether blue curved area, which not only covers the ROI but avoids generating too much overlap with other sub-FOVs.

Fig. 7. Adjustment method for remote FOVs: (a) General situation. (b) When the ROI crosses the boundary of the camera’s original FOV.

When the ROI is beyond the original camera imaging FOV, the remote FOV cannot cover the ROI even when |Δθ| increases from 0 to 180°. To solve this problem, we can continue rotating the two prisms to reduce the relative angle |Δθ| and move the remote FOVs to the reverse orientation, as shown in Fig. 7(b). The green rectangular area represents the ROI of x∈[300, 400], y∈[100, 200], the red curved area represents the initial location of the 4th sub-FOV, the blue curved area represents the adjusted FOV, and the black rectangle represents the original imaging FOV of the camera. If neither method can make the remote FOV cover the ROI, the orientation of the FOV can be directed directly to the center of the ROI under θ1=θ2, and then |Δθ| can be gradually increased until the threshold value is reached.

The two transitional FOVs have a relatively shorter distance to the ROI. It should first be noted that the orientations of the three adjacent FOVs are obtained by evenly dividing the angle Ω (180° under initial conditions) between the two transitional FOVs. Therefore, we hope to obtain a larger Ω while avoiding large overlap between the adjacent FOVs and the remote FOVs, so as to obtain a larger total FOV. The adjustment strategy for the transitional FOVs is illustrated in Fig. 8. When a transitional FOV is able to cover the ROI at its initial position, Ω can be increased by adjusting ψ, as shown in Fig. 8(a). The green rectangular area in Fig. 8(a) represents the ROI, the red curve represents the initial position of the transitional FOV (1st sub-FOV), and the arrows indicate the orientation of the transitional FOV. In contrast, when the transitional FOV cannot cover the ROI at its initial position, ψ should be adjusted along the direction that decreases Ω to ensure that the ROI can be covered. As compensation, the reduction of Ω can be relieved by appropriately increasing |Δθ|, as shown in Fig. 8(b). If the two prisms rotate at velocity ω1 to adjust ψ and at velocity ω2 to increase |Δθ|, then the actual velocity of prism Π1 is −ω1 − tω2, while the actual velocity of prism Π2 is −ω1 + tω2, where the coefficient t is a constant that determines whether the adjustment process focuses more on changing ψ or on changing |Δθ|.

Fig. 8. Adjustment strategy for transitional FOVs: (a) Initial state of the transitional FOV can cover the ROI. (b) Initial state cannot cover the ROI.

The orientations of the adjacent FOVs are obtained by evenly dividing Ω, and the corresponding prism angles can be acquired by setting θ1=θ2, as shown in Fig. 9. The colored curved boxes represent the adjacent FOVs, the black arrows represent the directions of the two transitional FOVs, and the colored arrows represent the orientations of the adjacent FOVs. When the adjacent FOVs cannot cover the ROI, this can be solved by comprehensively adjusting ψ and |Δθ|, such as when the ROI is not initially inside a transitional FOV.

Fig. 9. Adjustment method for adjacent FOVs.

The following is an example application of the Characteristic Algorithm for the ROI given by x∈[50,150] and y∈[50,150]; the resulting rotation angles of the Risley-prism are listed in Table 2. Because the ROI is located in region II, the adjacent FOVs are the 2nd, 3rd, and 6th sub-FOVs, the remote FOVs are the 4th, 7th, and 8th sub-FOVs, and the transitional FOVs are the 1st and 9th sub-FOVs. First, the rotation angles of the central FOV are obtained according to the inverse solution formula of the Risley-prism [20]. Then, the rotation angles of the Risley-prism for the three remote FOVs are calculated according to the previous method. When optimizing the transitional FOVs, it is found that neither of them can cover the ROI at the initial position, so we take t=2 for the calculation. According to the orientations of the two transitional FOVs (130.79° and −16.71°), the prism rotation angles of the three adjacent FOVs can be directly obtained. Figure 10 illustrates the sub-FOVs obtained by the Characteristic Algorithm: Figs. 10(a), 10(b), and 10(c) plot the remote FOVs, transitional FOVs, and adjacent FOVs, respectively, while Fig. 10(d) plots the blend of all sub-FOVs. In Fig. 10, the light blue area represents the deflected sub-FOVs, the green rectangular area represents the ROI, the black dashed box represents the camera’s original FOV, and the colored curves represent the boundaries of the sub-FOVs. The Area Expansion Ratio of the total FOV in Fig. 10(d) is approximately 4.4457, which greatly extends the total FOV under the premise that each sub-FOV covers the ROI.

Fig. 10. Deflection FOV obtained by Characteristic Algorithm: (a) Remote FOVs. (b) Transitional FOVs. (c) Adjacent FOVs. (d) Total imaging FOV obtained by stitching the nine sub-FOVs.

Table 2. Result of the Characteristic Algorithm.

4. Calculation of prism rotation angles based on PSO algorithm

The Characteristic Algorithm has relatively high efficiency, but the principle of classification can only simplify rather than eliminate the coupling between sub-FOVs, so the Characteristic Algorithm generally obtains a near-optimal rather than a globally optimal solution. In addition, the complexity of this coupling makes it difficult to prove the rationality and robustness of the Characteristic Algorithm. Therefore, this section constructs a fitness function and reports the use of the PSO algorithm to solve for rotation angles that satisfy foveated imaging [24]. Then, the feasibility of the Characteristic Algorithm and the PSO algorithm is proved by cross-validation, and the characteristics of the two algorithms are analyzed and compared.

4.1 Construction and calculation of the fitness function

The fitness value is obtained by computing the projected area of all sub-FOVs. However, the complex boundary functions and positional relations of the sub-FOVs complicate the calculation of the total area by direct integration. Therefore, this paper proposes a fast calculation method for the fitness value based on the idea of discretization.

Discretization refers to the use of the number of pixel points (integer points) within a region to represent the area of that region. The set A of all integer points in the total FOV can be obtained by calculating the integer point set Ai of each sub-FOV and performing a union operation. Then, the size of set A is the area of the total FOV. If a certain sub-FOV is found not to include the ROI during the calculation, Ai = ∅ is set as a penalty for the fitness value. Figure 11 illustrates the proposed method for quickly solving for all integer points in a deflected FOV, and the specific steps are as follows:


Fig. 11. Calculation method for the range of single-deflection FOV. (a) FOV boundary obtained by curve fitting. (b) The left and right boundaries along the Y direction. (c) The integer points along the Y direction. (d) The upper and lower boundaries along the X direction. (e) The integer points along the X direction. (f) The integer points in the FOV.


Step 1. Find the ranges [xmin, xmax] in the X-direction and [ymin, ymax] in the Y-direction from all the boundary points of the deflected FOV.

Step 2. Obtain the functions of the four boundaries by cubic spline interpolation.

Step 3. Obtain the set Aix from the upper and lower boundaries within [xmin, xmax], and obtain the set Aiy from the left and right boundaries within [ymin, ymax].

Step 4. Obtain the set of integer points Ai of the sub-FOVs by the intersection operation

$${A_i} = {A_{ix}} \cap {A_{iy}}.$$

When the set of integer points Ai, i∈[1, n] of each sub-FOV has been solved, the total area S can be obtained by performing the union operation.

$$A = {A_1} \cup {A_2} \cup \ldots \cup {A_n},$$
$$S = {\textrm{size}}(A).$$
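The discretized area computation above can be sketched as follows. The true sub-FOV boundaries come from the cubic-spline fits of Step 2; in this illustrative sketch, circular sub-FOVs stand in for them, and `integer_points` and `fitness` are hypothetical helper names, not functions from the paper:

```python
# Discretized fitness evaluation: the area of the stitched FOV is
# approximated by counting integer pixel points.  Circular sub-FOVs are an
# illustrative stand-in for the spline-fitted boundaries of Step 2.

def integer_points(cx, cy, r):
    """Integer points inside a circular stand-in for one deflected sub-FOV."""
    pts = set()
    for x in range(int(cx - r), int(cx + r) + 1):
        for y in range(int(cy - r), int(cy + r) + 1):
            if (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2:
                pts.add((x, y))
    return pts

def fitness(sub_fovs, roi):
    """Total stitched area S = |A1 U ... U An|, with the empty-set penalty
    applied to any sub-FOV that fails to cover the ROI."""
    total = set()
    for cx, cy, r in sub_fovs:
        pts = integer_points(cx, cy, r)
        if not roi <= pts:          # sub-FOV does not contain the ROI
            pts = set()             # A_i = empty set  (penalty)
        total |= pts                # union operation over sub-FOVs
    return len(total)               # S = size(A)
```

The union makes overlapping regions count only once, which is exactly why the fitness rewards sub-FOVs that are dispersed while still covering the ROI.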

Therefore, the fitness function is defined as

$$F(({\theta _{i1}},{\theta _{i2}}),range,para) = S,$$
where (θi1, θi2), i∈[1, n], represents the rotation angles of the Risley prism corresponding to each sub-FOV; range indicates the location of the ROI, represented by its center coordinates and side length; and para represents the set of system parameters, such as the number of detector pixels (M, N), the equivalent focal length f, and the basic parameters of the prisms.

Obviously, range depends on the position of the object to be observed, while para is an inherent attribute of the device; both remain constant during the PSO iterations. Therefore, the fitness value S depends entirely on (θi1, θi2). By definition, the size of the total FOV is positively correlated with the fitness value, and the ultimate goal of the optimization is to find the (θi1, θi2) values that maximize it.

4.2 Cross-validation of the two algorithms

Although it requires a large number of iterations and risks converging to a local extremum, the PSO algorithm can approach the true optimal solution arbitrarily closely. Taking advantage of this, sufficient iterations were performed with the PSO algorithm, and the two algorithms were compared. The cross-validation not only proves their correctness but also reflects the excellent overall performance of the Characteristic Algorithm.

In this study, the standard PSO algorithm was used with learning factors c1 = c2 = 2, a linearly decreasing inertia weight ω∈[0.4, 0.9], and particle number p = 100. With the center coordinates of the ROI preset to (100, 100) and a side length of 100 pixels, Fig. 12 illustrates the iterative process of the PSO algorithm, which achieved the highest fitness value and relatively fast convergence in extensive experiments. These experiments confirm that PSO converges given sufficient iterations; the convergence value in Fig. 12 is 4.5830, only 3.09% higher than that of the Characteristic Algorithm. This small difference between the two algorithms' results demonstrates the reliability of both.
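As a sketch, a standard PSO loop with the stated configuration (c1 = c2 = 2, linearly decreasing ω∈[0.4, 0.9]) might look like the following. `pso_maximize` is a hypothetical helper, not code from the paper; in the paper's setting, the argument `f` would be the discretized fitness of Section 4.1 evaluated at the prism rotation angles:

```python
import random

def pso_maximize(f, dim, bounds, particles=100, iters=200,
                 c1=2.0, c2=2.0, w_max=0.9, w_min=0.4):
    """Standard PSO with a linearly decreasing inertia weight, as configured
    in the paper (c1 = c2 = 2, w in [0.4, 0.9], p = 100).  `f` is the fitness
    function to maximize; `bounds` = (lo, hi) applies to every dimension."""
    lo, hi = bounds
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(particles)]
    vs = [[0.0] * dim for _ in range(particles)]
    pbest = [x[:] for x in xs]              # personal best positions
    pval = [f(x) for x in xs]
    g = pval.index(max(pval))
    gbest, gval = pbest[g][:], pval[g]      # global best
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / max(iters - 1, 1)  # linear inertia
        for i, x in enumerate(xs):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - x[d])
                            + c2 * r2 * (gbest[d] - x[d]))
                x[d] = min(max(x[d] + vs[i][d], lo), hi)     # clamp to bounds
            val = f(x)
            if val > pval[i]:
                pbest[i], pval[i] = x[:], val
                if val > gval:
                    gbest, gval = x[:], val
    return gbest, gval
```

The simple quadratic used in testing is only a stand-in for the real fitness; the structure of the update loop is what matters here.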


Fig. 12. Iterative process of the PSO algorithm.


Furthermore, Fig. 13 illustrates the difference between the Characteristic Algorithm and the PSO algorithm. Figure 13(a) compares the two algorithms under the same ROI as in Fig. 12, where the colored curves represent three independent PSO runs and the black dotted line represents the result of the Characteristic Algorithm. Figure 13(b) compares the algorithms for an ROI farther from the origin of the image plane, with center coordinates (250, 250) and the same side length of 100 pixels.


Fig. 13. Comparison of the Characteristic Algorithm and PSO algorithm under different ROIs: (a) ROI with center coordinates (100, 100) and side length 100 pixels. (b) ROI with center coordinates (250, 250) and side length 100 pixels.


The independent PSO runs have different initial values and convergence rates, but all show an overall upward trend in fitness value and reach convergence after sufficient iterations. However, convergence often requires thousands to millions of iterations, and Fig. 13 covers only the first 10,000.

Figure 13 shows that the PSO algorithm can, in theory, obtain better solutions than the Characteristic Algorithm and has stronger generality. However, the large number of iterations greatly reduces its computational efficiency, so a precomputed database would be required for real-time operation. In contrast, although the Characteristic Algorithm must be adapted to the specific equipment parameters, its efficiency is unmatched, and it enables real-time calculation. Its results are slightly worse than those of the PSO algorithm, but its efficiency compensates for this shortcoming, and the slight gap is often negligible. In summary, the two algorithms have complementary advantages, and both can calculate rotation angles that satisfy foveated imaging.

It is worth noting that the convergence behavior of the PSO algorithm depends largely on its initial values; reasonable initialization can effectively improve both the initial fitness value and the convergence rate. In this study, we greatly improved the initial fitness value of PSO by computing the feasible solution space in advance. Alternatively, the solution of the Characteristic Algorithm can be used to construct the initial values of PSO; experiments show that this strategy significantly improves the convergence rate and greatly reduces the probability of converging to a poor local extremum. However, the loss of real-time performance is unavoidable whenever PSO is used, so the computing algorithm should be selected according to the overall strategy (real-time computation or database construction).

5. Experimental validation

In the previous sections, we demonstrated the feasibility of the rotation-angle solving algorithms through theoretical analysis. In theory, foveated imaging can be achieved by collecting sub-images at the calculated prism rotation angles. In this section, experiments further verify whether these sub-images meet the requirements of foveated imaging, that is, whether all sub-images contain the ROI and the stitched total FOV is large.

A picture of the RPIS used in the experiments is shown in Fig. 14. The two wedge prisms have the same wedge angle α1 = α2 = 14.85° and refractive index n1 = n2 = 1.515. The WAT-704R camera used in the RPIS has an FOV of 51°×40°, and its output resolution is 640×480 pixels. Because the prisms have a large wedge angle and the camera has a large FOV, chromatic dispersion seriously degrades the image quality of the RPIS. To minimize its impact, a 610 nm longpass colored glass filter is placed in front of the RPIS. Achromatic prisms were not used mainly to keep the system compact and to avoid occlusion of the FOV caused by a large axial size.


Fig. 14. RPIS used in the experiment. (1) Risley-prism setup; (2) CCD camera; (3) longpass colored glass filter; (4) stepping motor; (5) computer.


The two wedge prisms are gear-driven and can be independently rotated to any angle between 0 and 2π, with precise control of the prism rotation angles realized by the motor controller. In addition, the rotation-angle solving algorithm and the image processing algorithms are all integrated into the host computer.

The general procedure for foveated imaging using a single RPIS, as proposed in this paper, is shown in Fig. 15. The application of RPISs to FOV extension and super-resolution has been well studied, with good results [18,25]. Therefore, the specific digital image processing methods, including distortion correction, image stitching, and super-resolution algorithms, are not described in detail here. The research focuses instead on front-end information collection, aiming to verify that the proposed algorithm can use the RPIS to collect image material that meets the requirements of flexible foveated imaging.


Fig. 15. Summary of the proposed procedure for foveated imaging using a single RPIS.


According to Fig. 15, all sub-FOVs are first preset to their initial positions. In this state, every sub-FOV satisfies the dispersion principle well, and the stitched image has a considerable total FOV. However, the overlapping regions between sub-FOVs differ in the initial state, making it difficult to further improve the resolution of a specific ROI.

To obtain better foveated imaging, an ROI within the total FOV must first be selected. Many methods exist for finding a target or ROI in a camera's FOV, such as the frame-difference method, edge detection, and neural networks. Since the purpose of the experiment is to verify that the proposed algorithm can collect sub-images meeting the requirements of foveated imaging, the method of locating the ROI needs no special attention; we therefore chose the simplest approach and selected the ROI manually. In addition, because the FOV deflected by the prisms has decreasing resolution from center to periphery, we deliberately limited the ROI selection to the camera's original FOV, as shown by the blue rectangle in Fig. 16. According to the image information, the coordinate range of ROI-1 is x∈[−140, −40], y∈[10, 50]. The yellow star in Fig. 16 marks the center of the camera's original FOV, which is also the origin of the image plane.


Fig. 16. Selected ROI.


The sub-region where the ROI is located is determined from its center coordinates, and the Characteristic Algorithm is used to solve the nine sets of Risley-prism rotation angles for foveated imaging. Nine sub-images are then collected at the calculated rotation angles. The correction of RPIS imaging distortion consists of two steps: the distortion caused by the camera lens is corrected by camera calibration [26], and the distortion caused by the Risley prism is then corrected by the reverse ray-tracing method [27]. Reverse ray tracing is currently the mainstream method for correcting RPIS imaging distortion, and we have also studied distortion correction for prisms with large wedge angles [28]. Taking the 2nd sub-FOV as an example, the distortion correction process is shown in Fig. 17. Figure 17(a) shows the original image collected by the 2nd sub-FOV when the ROI is the blue rectangle in Fig. 16; Fig. 17(b) shows the image after correcting the distortion caused by the camera lens; and Fig. 17(c) shows the image after further correcting the distortion caused by the Risley prism.
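The two-stage correction can be sketched as a pair of per-pixel remaps, assuming each stage is expressed as a lookup table of source coordinates. In practice, these maps would come from camera calibration [26] and reverse ray tracing through the prisms [27]; the helper names and the nearest-neighbour interpolation below are simplifying assumptions for illustration:

```python
import numpy as np

def remap_nearest(img, map_x, map_y):
    """Nearest-neighbour remap: output[y, x] = img[map_y[y, x], map_x[y, x]].
    map_x and map_y give, for every output pixel, its source coordinates."""
    h, w = img.shape[:2]
    ys = np.clip(np.rint(map_y), 0, h - 1).astype(int)
    xs = np.clip(np.rint(map_x), 0, w - 1).astype(int)
    return img[ys, xs]

def correct_sub_image(raw, lens_map, prism_map):
    """Apply the two correction stages in sequence.  Each map is a pair
    (map_x, map_y); identity maps would leave the image unchanged."""
    lens_corrected = remap_nearest(raw, *lens_map)    # stage 1: camera lens
    return remap_nearest(lens_corrected, *prism_map)  # stage 2: Risley prism
```

Expressing both stages as remaps means they can be precomputed once per rotation-angle pair and reused for every frame captured at those angles.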


Fig. 17. Distortion correction process. (a) Raw image of the 2nd sub-FOV. (b) Image after correcting the distortion caused by the camera lens. (c) Image after further correcting the distortion caused by the Risley prism.


The corrected sub-images are shown in Fig. 18(a). All sub-images cover the selected ROI and contain information from different spatial ranges. Using the central-region image as a reference, a panoramic image is obtained by stitching all sub-images [29], as shown in Fig. 18(b). The blue rectangle indicates ROI-1, and the yellow star indicates the origin of the image plane. The FOV angle of the stitched image is significantly enlarged (k = 4.488), and the imaging FOV deviates from the origin of the image plane, placing the ROI near the center of the stitched image. In addition, Fig. 18(b) reflects the foveation effects of both imaging distortion and image interpolation: although the stitched image is digitally resampled on a uniform grid, the foveated variation in acuity is apparent as increased blurring in the peripheral regions. The stitched image in Fig. 18(b) already shows obvious foveal characteristics without super-resolution, and the foveal ratio can be further improved by super-resolution reconstruction. With nine sub-images, the theoretical foveal ratio achievable by super-resolution is 3; however, because of the variable-resolution imaging characteristics introduced by the prisms, the actual foveal ratio always surpasses 3 [12].


Fig. 18. Results of image processing. (a) Corrected sub-images for ROI-1. (b) Stitched image for ROI-1. (c) Corrected sub-images for ROI-2. (d) Stitched image for ROI-2.


To verify the flexibility of the fovea, we selected another ROI in the same scene, marked ROI-2, with coordinate range x∈[150, 210], y∈[15, 90], as indicated by the red rectangle in Fig. 16. The Characteristic Algorithm was then used to solve the Risley-prism rotation angles again; the corrected sub-images and stitched image are shown in Figs. 18(c) and 18(d), respectively. In Fig. 18(c), all sub-images cover ROI-2 and contain information from different spatial areas, which can be used to obtain a foveated image. In Fig. 18(d), the stitched image has a large total FOV and shows obvious foveal characteristics. In addition, the stitched FOV shifts to the right relative to the origin of the image plane, placing ROI-2 near the center of the image. Figure 18 shows that the RPIS can still collect excellent foveated imaging material after the ROI position is changed, and foveated images can be obtained through subsequent digital image processing, which reflects the flexibility of fovea adjustment offered by the proposed method.

Extensive experimental results show that the proposed method can use a single RPIS to achieve foveated imaging by capturing a limited number of sub-images, with flexible selection of the fovea location.

6. Conclusions

Taking advantage of the powerful beam-deflection capability of Risley prisms, this paper proposed a foveated imaging method using a single RPIS that achieves flexible ROI variation. The beam propagation model and imaging model of the RPIS were introduced, and the influence of the Risley-prism rotation angles on the characteristics of the deflected FOV was discussed. An evaluation method for foveated imaging by the RPIS was developed, and an algorithm for solving the prism rotation angles based on the characteristics of the deflected FOV was proposed. A fitness function was constructed according to the evaluation method, and the rotation angles satisfying foveated imaging were also solved using the PSO algorithm. The reliability and robustness of the PSO algorithm and the Characteristic Algorithm were then proved by cross-validation, and the efficiency of the Characteristic Algorithm was further verified. The experimental results show that the proposed method can achieve foveated imaging by collecting a set of original images and that the fovea can be adjusted flexibly. This method not only simplifies the hardware structure but also effectively reduces the quantity of original image data, providing a promising route to flexible foveated imaging.

Funding

Education and Scientific Research Foundation for Young Teachers in Fujian Province (JAT190005, JAT200040).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. Q. Hao, Y. Tao, J. Cao, M. Tang, Y. Cheng, D. Zhou, Y. Ning, C. Bao, and H. Cui, “Retina-like Imaging and Its Applications: A Brief Review,” Appl. Sci. 11(15), 7058 (2021). [CrossRef]  

2. R. M. Narayanan, T. J. Kane, T. F. Rice, and M. J. Tauber, “Considerations and Framework for Foveated Imaging Systems,” Photonics 5(3), 18 (2018). [CrossRef]  

3. R. Etienne-Cummings, J. Van der Spiegel, P. Mueller, and M. Zhang, “A foveated silicon retina for two-dimensional tracking,” in IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing (IEEE, 2000), pp. 504–517.

4. G. Sandini, P. Questa, D. Scheffer, B. Diericks, and A. Mannucci, “A retina-like CMOS sensor and its applications,” in Proceedings of the 2000 IEEE Sensor Array and Multichannel Signal Processing Workshop. SAM 2000 (IEEE, 2000), pp. 514–519.

5. T. Martinez, D. V. Wick, and S. R. Restaino, “Foveated, wide field-of-view imaging system using a liquid crystal spatial light modulator,” Opt. Express 8(10), 555–560 (2001). [CrossRef]  

6. X. Du, J. Chang, Y. Zhang, X. Wang, B. Zhang, L. Gao, and L. Xiao, “Design of a dynamic dual-foveated imaging system,” Opt. Express 23(20), 26032 (2015). [CrossRef]  

7. S. Wang, X. Chen, Y. Yang, and M. Ye, “Foveated imaging using a liquid crystal lens,” Optik 193, 163041 (2019). [CrossRef]  

8. H. Hua and S. Liu, “Dual-sensor foveated imaging system,” Appl. Opt. 47(3), 317–327 (2008). [CrossRef]  

9. G. Carles, J. Babington, A. Wood, J. F. Ralph, and A. R. Harvey, “Superimposed multi-resolution imaging,” Opt. Express 25(26), 33043–33055 (2017). [CrossRef]  

10. G. Carles, G. Muyo, N. Bustin, A. Wood, and A. R. Harvey, “Compact multi-aperture imaging with high angular resolution,” J. Opt. Soc. Am. A 32(3), 411–419 (2015). [CrossRef]  

11. W. Chen, X. Zhang, X. Liu, and F. Fang, “Optical design and simulation of a compact multi-aperture camera based on a freeform microlens array,” Opt. Commun. 338, 300–306 (2015). [CrossRef]  

12. G. Carles, S. Chen, N. Bustin, J. Downing, D. McCall, A. Wood, and A. R. Harvey, “Multi-aperture foveated imaging,” Opt. Lett. 41(8), 1869–1872 (2016). [CrossRef]  

13. A. Li, X. Liu, W. Gong, W. Sun, and J. Sun, “Prelocation image stitching method based on flexible and precise boresight adjustment using Risley prisms,” J. Opt. Soc. Am. A 36(2), 305 (2019). [CrossRef]  

14. A. Li, Q. Li, Z. Deng, and Y. Zhang, “Risley-prism-based visual tracing method for robot guidance,” J. Opt. Soc. Am. A 37(4), 705 (2020). [CrossRef]  

15. A. Li, Z. Zhao, X. Liu, and Z. Deng, “Risley-prism-based tracking model for fast locating a target using imaging feedback,” Opt. Express 28(4), 5378–5392 (2020). [CrossRef]  

16. Q. Hao, Z. Wang, J. Cao, and F. Zhang, “A Hybrid Bionic Image Sensor Achieving FOV Extension and Foveated Imaging,” Sensors 18(4), 1042 (2018). [CrossRef]  

17. H. Cui, Q. Hao, J. Cao, Z. Wang, H. Zhang, and Y. Cheng, “Curved retina-like camera array imaging system with adjustable super-resolution fovea,” Appl. Opt. 60(6), 1535–1543 (2021). [CrossRef]  

18. Z. Wang, J. Cao, Q. Hao, F. Zhang, Y. Cheng, and X. Kong, “Super-resolution imaging and field of view extension using a single camera with Risley prisms,” Rev. Sci. Instrum. 90(3), 033701 (2019). [CrossRef]  

19. Y. Li, “Third-order theory of the Risley-prism-based beam steering system,” Appl. Opt. 50(5), 679–686 (2011). [CrossRef]  

20. C. T. Amirault and C. A. Dimarzio, “Precision pointing using a dual-wedge scanner,” Appl. Opt. 24(9), 1302–1308 (1985). [CrossRef]  

21. Y. Li, “Closed form analytical inverse solutions for Risley-prism based beam steering systems in different configurations,” Appl. Opt. 50(22), 4302–4309 (2011). [CrossRef]  

22. Y. Lu, Y. Zhou, M. Hei, and D. Fan, “Frame frequency prediction for Risley-prism-based imaging laser radar,” Appl. Opt. 53(16), 3556–3564 (2014). [CrossRef]  

23. Y. Lu, Y. Zhou, M. Hei, and D. Fan, “Theoretical and experimental determination of steering mechanism for Risley prism systems,” Appl. Opt. 52(7), 1389–1398 (2013). [CrossRef]  

24. R. Eberhart and J. Kennedy, “A new optimizer using particle swarm theory,” in Proceedings of the Sixth International Symposium on Micro Machine and Human Science (MHS '95) (IEEE, 1995), pp. 39–43.

25. Y. Qi, Y. Shen, F. Huang, X. Wu, and J. Wu, “Method and evaluation of Enlarging Field of View based on Rotational Double Prisms,” Acta Opt. Sin. http://kns.cnki.net/kcms/detail/31.1252.O4.20210412.1340.018.html.

26. Z. Zhang, “Flexible camera calibration by viewing a plane from unknown orientations,” Seventh IEEE International Conference on Computer Vision IEEE (IEEE, 1999), pp. 666–673.

27. Y. Zhou, S. Fan, G. Liu, Y. Chen, and D. Fan, “Image distortions caused by rotational double prisms and their correction,” Acta Opt. Sin. 35, 143–150 (2015).

28. F. Huang, H. Ren, Y. Shen, and P. Wang, “Error analysis and optimization for Risley-prism imaging distortion correction,” Appl. Opt. 60(9), 2574–2582 (2021). [CrossRef]  

29. M. Brown and D. G. Lowe, “Automatic Panoramic Image Stitching using Invariant Features,” Int. J. Comput. Vis. 74(1), 59–73 (2007). [CrossRef]  



