Optica Publishing Group

Design of compact off-axis freeform imaging systems based on optical-digital joint optimization


Abstract

Using freeform optical surfaces can effectively reduce the weight and volume of an imaging system while maintaining good performance and advanced system specifications. However, traditional freeform surface design remains very difficult when an ultra-small system volume or very few elements are required. Considering that the images generated by the system can be recovered by digital image processing, in this paper we propose a design method for compact and simplified off-axis freeform imaging systems using an optical-digital joint design process, which fully integrates the design of a geometric freeform system and the image recovery neural network. This design method works for off-axis nonsymmetric system structures and multiple freeform surfaces with complicated surface expressions. The overall design framework, ray tracing, image simulation and recovery, and loss function establishment are demonstrated. We use two design examples to show the feasibility and effect of the framework. One is a freeform three-mirror system with a much smaller volume than a traditional freeform three-mirror reference design. The other is a freeform two-mirror system whose element number is reduced compared with the three-mirror system. An ultra-compact and/or simplified freeform system structure as well as good recovered output images can be realized.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Imaging systems are very important in advancing how humans explore the unknown world. Reducing the system volume and the number of elements is one of the key goals of imaging optical design, in order to offer convenience to users, reduce the size of the whole opto-electronic system, and reduce the complexity and difficulty of applications such as remote sensing and telescopes. There is a balance between system compactness and aberration correction during optimization. For a long time, spherical surfaces have been used as the surface type in imaging optical system design due to their fabrication convenience, and the resulting systems are rotationally symmetric. However, as there is only one design parameter (the radius or curvature) for a sphere, the system may need a very complex structure or many elements to achieve the required performance. To improve system specifications and performance while simplifying the system structure, aspherical surfaces have been used over the past century in imaging cameras, telescopes and microscopes. However, as both spherical and aspherical surfaces are rotationally symmetric, it is hard to correct the aberrations in off-axis systems, which have been increasingly used in recent decades.

Freeform optical surfaces, which are generally defined as non-rotationally symmetric surfaces, offer many more design parameters for optical system design. The aberrations generated by freeform surfaces are consistent with the aberrations induced by nonsymmetric system structures. In this way, the use of freeform surfaces can reduce the system volume and number of elements in off-axis imaging systems. In recent years, freeform surfaces have been successfully used in unobscured reflective systems [1–6], head-mounted and head-up displays [7–12], imaging spectrometers [13–15], miniature cameras [16–18], etc. Although the use of freeform surfaces can greatly simplify the system structure, the ability of freeform surfaces is not unlimited. How to further reduce the system volume and element number of freeform imaging systems remains a difficult problem for optical design.

In fact, the quality of the images output by imaging systems can be further improved by digital image processing methods. Typical methods include the Wiener filter, Tikhonov filtering and the Richardson-Lucy algorithm. Helstrom et al. proposed the Wiener filtering algorithm on the basis of inverse filtering, using statistical ideas to obtain the characteristics of the noise and the point spread function for aberration correction [19]. The Richardson-Lucy algorithm assumes that the noise follows a Poisson distribution, transforms the problem into a Bayesian probability problem, and solves it by maximum likelihood estimation [20,21]. Traditional image recovery only focuses on the image processing, and does not adjust the imaging system itself. If geometric optical design and image recovery are combined and integrated, better solutions can be found, leading to better image output. There are already some relevant studies on the joint design of imaging systems. Sitzmann et al. achieved extended depth-of-field and super-resolution imaging by jointly optimizing the optical system and solving a Tikhonov-regularized least-squares problem [22]. Peng et al. realized imaging under white-light illumination by jointly designing lightweight diffractive-refractive optics and an image recovery algorithm using Bayesian optimization [23]. With the booming development of deep learning in computer vision in recent years, image recovery techniques based on neural networks have gradually come into focus. For example, Peng et al. realized large field-of-view imaging by joint optimization of thin-plate optics and neural networks [24]. Sun et al. proposed a differentiable lens and ray-tracing model, and achieved large field-of-view imaging and extended depth-of-field imaging using an end-to-end design method [25].
For freeform optical system design, if a deep neural network is introduced for image recovery, the advantages of both freeform optics and the image recovery network can be fully integrated and exploited. Compact freeform system designs with simplified structure may then be achieved, realizing reduced system volume and element number while maintaining good image performance after recovery. However, a design framework for compact and simplified off-axis nonsymmetric freeform imaging systems consisting of multiple freeform surfaces with complicated surface expressions remains to be explored.

In this paper, we propose a design method for ultra-compact and simplified off-axis freeform imaging systems using an optical-digital joint design process. The joint design combines the geometrical optical design of the freeform imaging system and image recovery using a neural network. Freeform imaging systems with off-axis nonsymmetric structures and multiple freeform surfaces with complicated surface expressions are considered. The overall design framework, ray tracing, image simulation and recovery, and loss function establishment are demonstrated in detail. Using the proposed method, freeform system designs with ultra-compact and simplified structure as well as good overall image quality can be realized. The feasibility and effect of the proposed design framework are demonstrated by two design examples. One is a freeform off-axis three-mirror system with a much smaller volume (69.69% smaller than an original freeform three-mirror system). The other is a freeform off-axis two-mirror system with a reduced number of elements compared with the reference freeform three-mirror system. The proposed framework can also be extended to the joint design of other kinds of off-axis nonsymmetric imaging systems using other surface types or phase elements such as holographic elements and metasurfaces.

2. Design method

2.1 Overview of the design framework

The framework of the proposed optical-digital joint design of freeform imaging systems is shown in Fig. 1. The framework consists of two main parts: a forward pass and a backward propagation process. In the forward pass, the point spread functions (PSFs) of multiple field points among the full field-of-view (FOV) are calculated using the data obtained by ray tracing. Then the simulated images of the real scenes (ground truths) generated by the freeform imaging system are obtained. The simulated images are then recovered by the neural network (image recovery net). The differences between the recovered images and the real scenes are calculated (loss function L1); meanwhile, the imaging performance and the constraints of the freeform optical system can also be evaluated using ray tracing (loss function L2). In this way, the total loss function Ltotal can be established and calculated. In the backward propagation process, the partial derivative of the loss function with respect to each parameter in the freeform optical system and the network is calculated. Using these partial derivatives (the gradient vector), the parameters of the freeform system and the neural network are updated based on the Ltotal calculated in the forward pass, in order to improve the quality of the recovered images generated by the neural network and obtain a feasible freeform imaging system. The above process is repeated until the joint optimization of the freeform imaging optical system and the neural network is accomplished. The goal of the design framework is to generate a feasible ultra-compact or simplified freeform imaging system together with a corresponding image recovery neural network that outputs images as similar as possible to the real scenes.


Fig. 1. The design framework of compact freeform imaging system based on optical-digital joint optimization.


2.2 Differentiable ray tracing

In order to simulate the imaging process of a freeform optical system, it is necessary to trace rays across the full FOV from the object plane to the image plane. Ray tracing is a basic tool in many areas of computational imaging as well as in other freeform surface design methods such as the construction-iteration method, and the basic process and mathematical theory of ray tracing used in this method are the same. To allow joint optimization of the neural network and the freeform optical system, the ray tracing process must be differentiable so that partial derivative (gradient) calculations can be carried out to update the parameters in the backward propagation process. Automatic differentiation, or the automatic computation of gradients, can be easily done with tools or libraries such as PyTorch. The positions and directions of the rays and the positions of the freeform surfaces can be defined in the same global three-dimensional Cartesian coordinate system. In this paper, we focus on the common case where the system is symmetric about the YOZ plane. The position of a freeform surface can be characterized by the global y and z coordinates of its vertex and its tilt angle α about the x-axis relative to the global x direction. The ray tracing process involves transformations between the global coordinate system and the local coordinate system of each freeform surface, which are distinguished by the superscripts {global} and {local}.

Common freeform surface types include XY polynomial freeform surfaces and Zernike polynomial freeform surfaces, which are generally defined in a local coordinate system with the surface vertex as the origin. The XY polynomial surface is the simplest polynomial freeform surface type and matches the standard of CNC machining. Some design methods and examples using XY polynomial surfaces can be found in Refs. [2,3,7–10,12,16]. Zernike polynomials are continuous and orthogonal over the unit circle, and their terms correspond to the types of wave aberrations often used in optical testing. Zernike polynomial surfaces match the design of freeform systems based on nodal aberration theory (NAT). Some design methods and examples using Zernike polynomial surfaces can be found in Refs. [5,6,13,14]. In general, the freeform surface expression used for imaging optical system design is the combination of a base conic and freeform surface terms:

$$h(x,y) = \frac{{c({x^2} + {y^2})}}{{1 + \sqrt {1 - (1 + \kappa ){c^2}({x^2} + {y^2})} }} + \sum\limits_{i = 0}^q {{A_i}{g_i}(x,y)}, $$
where c is the curvature, κ is the conic constant, gi(x,y) represents a freeform term and Ai is the corresponding coefficient. The partial derivatives of h(x,y) with respect to x and y can be calculated easily.
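As a concrete illustration, the sag of Eq. (1) and the gradient needed later for ray tracing can be sketched in a few lines of Python. This is a minimal NumPy sketch: the function names, the second-order term list, and the finite-difference gradient are illustrative choices, not the paper's implementation, which uses analytic derivatives inside an automatic-differentiation framework.

```python
import numpy as np

def freeform_sag(x, y, c, kappa, coeffs, terms):
    """Sag h(x, y) of Eq. (1): base conic plus polynomial freeform terms."""
    r2 = x**2 + y**2
    conic = c * r2 / (1.0 + np.sqrt(1.0 - (1.0 + kappa) * c**2 * r2))
    return conic + sum(A * g(x, y) for A, g in zip(coeffs, terms))

def surface_gradient(x, y, c, kappa, coeffs, terms, eps=1e-6):
    """Gradient (dh/dx, dh/dy, -1) of f = h(x, y) - z, Eq. (3), by central
    differences (an analytic or autograd derivative would be used in practice)."""
    hx = (freeform_sag(x + eps, y, c, kappa, coeffs, terms)
          - freeform_sag(x - eps, y, c, kappa, coeffs, terms)) / (2 * eps)
    hy = (freeform_sag(x, y + eps, c, kappa, coeffs, terms)
          - freeform_sag(x, y - eps, c, kappa, coeffs, terms)) / (2 * eps)
    return np.array([hx, hy, -1.0])

# Illustrative XY-polynomial terms up to second order only; a real design
# carries higher orders (the example in Section 3 uses up to fourth order).
terms = [lambda x, y: x**2, lambda x, y: y**2]
```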

The implicit expression of the freeform surface can be written as:

$$f(x,y,z) = h(x,y) - z. $$

Its gradient vector ∇f is:

$$\nabla f = (\frac{{\partial h(x,y)}}{{\partial x}},\frac{{\partial h(x,y)}}{{\partial y}}, - 1). $$

Differentiable ray tracing can be carried out using Newton’s method [25,26]. As shown in Fig. 2, for any ray in space, we define it as (p,d), where p = [px,py,pz] represents the coordinates of the start point of the ray and d = [dx,dy,dz] represents the normalized ray direction vector. The coordinates of a ray after propagating w units in space along direction d starting from p can be expressed as (p + wd). According to Eq. (2), the intersection of this ray with the surface f(x,y,z) satisfies:

$$f(x,y,z) = f(\boldsymbol{p} + w\boldsymbol{d}) = 0. $$
Then, for the ray (p,d), the calculation of its intersection with the surface can be transformed into finding the value of w. Using Newton's method, w can be obtained iteratively using the following equation:
$${w^{[n]}} = {w^{[n - 1]}} - \frac{{f(\boldsymbol{p} + {w^{[n - 1]}}\boldsymbol{d})}}{{f^{\prime}(\boldsymbol{p} + {w^{[n - 1]}}\boldsymbol{d})}} = {w^{[n - 1]}} - \frac{{f(\boldsymbol{p} + {w^{[n - 1]}}\boldsymbol{d})}}{{\nabla f \cdot \boldsymbol{d}}}, $$
where w[n] represents the value of w after the nth iteration. The initial guess of w can be calculated by dividing the distance between the start point of the ray and the vertex of the surface along the local z-direction of the surface by the z-component of the normalized ray direction vector, as shown in Fig. 2(a). Thus, the approximate value of w[0] can be obtained.
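The Newton iteration of Eq. (5) is short enough to sketch directly. Below is a minimal NumPy version; the function names and the parabolic test surface are illustrative, and the paper's implementation runs inside PyTorch so that the iteration stays differentiable.

```python
import numpy as np

def intersect(p, d, f, grad_f, w0, tol=1e-10, max_iter=50):
    """Find the ray-surface intersection p + w*d with Newton's method, Eq. (5).
    f(q) is the implicit surface function of Eq. (2) and grad_f(q) its
    gradient, Eq. (3); w0 is the initial guess described above."""
    w = w0
    for _ in range(max_iter):
        q = p + w * d
        step = f(q) / np.dot(grad_f(q), d)   # f / (grad f . d)
        w = w - step
        if abs(step) < tol:
            break
    return p + w * d

# Example: the parabolic surface z = y**2, i.e. f(x, y, z) = y**2 - z
f = lambda q: q[1]**2 - q[2]
grad_f = lambda q: np.array([0.0, 2.0 * q[1], -1.0])

p = np.array([0.0, -1.0, 2.0])
d = np.array([0.0, 1.0, -1.0]) / np.sqrt(2.0)
w0 = p[2] / abs(d[2])          # initial guess along the local z-direction
hit = intersect(p, d, f, grad_f, w0)
```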


Fig. 2. The schematic plot of the ray tracing process. (a) The initial guess of w. (b) The intersection point and the outgoing ray (here a reflective surface is taken as an example).


The iteration of w using Eq. (5) is stopped when the change of w between iterations is less than a maximum allowable value. At this point, the intersection point p′ = p + wd where the ray meets the surface has been obtained, as shown in Fig. 2(b). The ray is then refracted or reflected by the surface. The outgoing direction vector d′ of this ray can be calculated using the law of refraction or reflection, as the coordinates and normal vector at the intersection as well as the incident ray direction are all known. With the new ray (p′, d′) obtained, its intersection with the next surface can be calculated. The above process is repeated until the image point is obtained. The whole ray tracing process is fully differentiable, which is the key to updating the parameters during system optimization.

2.3 Image simulation and recovery

In order to realize image recovery or image reconstruction using a neural network, it is necessary to obtain the simulated images of the freeform imaging system corresponding to the real scenes based on the above ray tracing method. This can be done by convolving the PSFs of field points across the full FOV with the real scene. The PSF is the response of an optical system to a point source, and in general it is space-variant across the FOV. The simulated image can be obtained by convolving each point in the real scene with its corresponding PSF and superimposing all the convolution results. However, due to computer memory limitations as well as computational time, here we approximate the PSF as space-invariant within each subarea of the full FOV (real scene). In addition, the diffraction effect of the system is ignored, as the aberrations of the system are much larger and the diffraction effect is not significant in the visible band, on which we focus in the examples section. In order to obtain the PSF of a certain field point on the image plane, N rays with different pupil coordinates are traced from the object space to the image plane for this field point. The intensity distribution of the μth ray on the image plane can be considered to be a Gaussian distribution [27] (as shown in Fig. 3(a)). As a result, the intensity of one ray can be characterized by a two-dimensional matrix. For all the sampled rays of one field point, the intensity matrix of each ray is calculated, and all the matrices are superimposed to get the PSF of this field point, as shown in Fig. 3(b). In practice, the image point of the chief ray is taken as the center of the PSF matrix, and the size of the matrix is defined as K × K (each element of the matrix corresponds to a pixel on the image plane). The intensity of the μth ray at pixel (m,n) (1 ≤ m,n ≤ K) on the image plane can be calculated by the following equation:

$$e_{m,n}^{\left\langle \mu \right\rangle } = \frac{1}{{\sqrt {2\pi } \sigma }}\exp ( - \frac{{r_{m,n}^2}}{{2{\sigma ^2}}}), $$
where rm,n is the distance between the intersection of the ray with the image plane and the pixel (m,n), and $\sigma = \sqrt {\Delta {x^2} + \Delta {y^2}}$, where Δx and Δy represent the pixel size on the image plane in the x and y directions, respectively. The PSF can then be calculated as:
$$\boldsymbol{PSF} = {\left[ {\sum\limits_{\mu = 1}^N {e_{m,n}^{\left\langle \mu \right\rangle }} } \right]_{K \times K}}(1 \le m,n \le K). $$
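Under these definitions, the PSF accumulation of Eqs. (6)-(7) can be sketched as follows. This is a NumPy sketch with hypothetical function and variable names; the `hits` array would come from the ray trace described above, and a unit-energy normalization is added here for convenience.

```python
import numpy as np

def psf_from_rays(hits, center, K, dx):
    """Accumulate the Gaussian footprint of each traced ray, Eqs. (6)-(7).
    hits: (N, 2) ray intersections with the image plane; center: chief-ray
    image point; K: PSF grid size; dx: pixel pitch (square pixels assumed)."""
    sigma = np.sqrt(dx**2 + dx**2)          # sigma from the pixel size
    # pixel-centre coordinates of the K x K grid around the chief-ray point
    idx = (np.arange(K) - (K - 1) / 2.0) * dx
    gx, gy = np.meshgrid(idx, idx, indexing="xy")
    psf = np.zeros((K, K))
    for hx, hy in hits - center:            # ray hits relative to the centre
        r2 = (gx - hx) ** 2 + (gy - hy) ** 2
        psf += np.exp(-r2 / (2.0 * sigma**2)) / (np.sqrt(2.0 * np.pi) * sigma)
    return psf / psf.sum()                  # normalise to unit energy
```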


Fig. 3. The schematic plot of the PSF calculation process. (a) The intensity distribution of a single ray on image plane. (b) The intensity distribution of multiple rays from one field point on image plane. Note that the intensity in (a) is plotted exaggeratedly for clarity.


The full FOV (real scene) as well as the image plane is divided into U × V sub-areas (segments), and the PSF within each sub-area is considered to be approximately space-invariant. In this way, only the PSFs of U × V field points (the central field point of each sub-area) need to be calculated by ray tracing or interpolation. In the image simulation process, each image sub-area IMGp,q (1 ≤ p ≤ U, 1 ≤ q ≤ V) is obtained by convolving the corresponding object sub-area OBJp,q (a segment of the real scene) with the corresponding PSF:

$$\boldsymbol{IM}{\boldsymbol{G}_{p,q}} = \boldsymbol{OB}{\boldsymbol{J}_{p,q}} \ast \boldsymbol{PS}{\boldsymbol{F}_{p,q}}. $$
Then, the U × V image segments are stitched together to form the final simulated image of the real scene. As the distortion of the system can be controlled to be small, its effect can be neglected. It is worth noting that the size of the image may differ before and after convolution. In order to ensure that the final simulated image has the same size as the real scene, padding must be applied to each sub-area OBJp,q used for convolution so that the final stitching result matches the size of the real scene. The above image simulation process is shown in Fig. 4. Artifacts can be a potential issue when stitching sub-areas: the PSF varies across the FOV, but we approximate it as space-invariant within each sub-area, and significant differences between the PSFs of adjacent field points can cause artifacts. To avoid false images, we apply padding to each sub-area so that each sub-area boundary contains information from adjacent sub-areas. Additionally, we obtain PSFs for a sufficient number of field points through interpolation, which also helps to avoid artifacts. If our method is applied to a large-FOV system, more field points should be sampled and traced. During image simulation, a larger number of field points will improve the accuracy of the simulated image and benefit the performance, but it will significantly increase the memory and time cost. Therefore, the selection of the number of fields is a balance between the accuracy of the simulated images and the memory/time cost. There is no specific minimum number of field points, as it varies for different system design tasks, but the central field point and the field points in the marginal areas of different directions should be sampled. In this way, the volume reduction task can also be conducted.
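The per-sub-area convolution and stitching of Eq. (8) can be sketched as below. This is a NumPy sketch with hypothetical names: for brevity each sub-area is zero-padded rather than padded with neighbouring scene content as the text prescribes, and the scene dimensions are assumed divisible by U and V.

```python
import numpy as np

def conv2_same(img, kernel):
    """Direct 'same'-size 2-D convolution with zero padding (odd kernel)."""
    K = kernel.shape[0]
    pad = K // 2
    padded = np.pad(img, pad)
    kf = kernel[::-1, ::-1]                  # flip kernel for convolution
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + K, j:j + K] * kf)
    return out

def simulate_image(scene, psfs, U, V):
    """Blur each of the U x V sub-areas with its own PSF, Eq. (8), and
    stitch the results; psfs[p][q] is the PSF of sub-area (p, q)."""
    H, W = scene.shape
    h, w = H // U, W // V
    out = np.zeros_like(scene, dtype=float)
    for p in range(U):
        for q in range(V):
            sub = scene[p * h:(p + 1) * h, q * w:(q + 1) * w]
            out[p * h:(p + 1) * h, q * w:(q + 1) * w] = conv2_same(sub, psfs[p][q])
    return out
```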


Fig. 4. The schematic plot of the image simulation process.


After obtaining the simulated image of the imaging system, we use a deep neural network to recover it. The goal of the image recovery net is to obtain an image as similar as possible to the real scene. U-Net is widely used because of its good performance in image recovery and image segmentation tasks [28]. He et al. proposed the residual structure [29], which solves the degradation problem of deep neural networks so that they can learn the features of the dataset better. Zhang et al. proposed Res-UNet, which combines the advantages of both U-Net and residual networks [30]. The network consists of three parts: encoding, bridge and decoding, all built with residual structures. In this paper, we modified the original Res-UNet by deepening both the encoding and decoding parts to extract more features, and by removing the residual structure from the decoding part to reduce the memory occupation of the network. The modified network is used for image recovery. The architecture of the network is shown in Fig. 5.


Fig. 5. The architecture of the image recovery net.


Before being input to the network, the data are normalized using the mean and variance of the dataset's pixel intensities, which improves network performance and stabilizes the optimization process. A clamp layer is placed at the end of the network to ensure that the output pixel intensities lie in [0, 1].

2.4 Loss function

The loss function (or merit function) is crucial for the training process. The loss function used in the joint optical-digital optimization characterizes the similarity between the real scenes and the recovered images, as well as the image quality and the constraints of the freeform optical system. A freeform system satisfying the basic structure requirements and system specifications is used as the initial system for joint optimization. If a system far from the design requirements is taken as the initial system, the joint optimization may be much slower or may even fail.

2.4.1 Loss related to the imaging performance of the system

In traditional imaging system design, the goal is to obtain good imaging performance while satisfying the design requirements. In the optical-digital joint design process, the imaging performance of the system is not required to be as good as in traditional designs, as the images can be recovered by the image recovery net. However, the imaging performance should still be controlled to some extent; otherwise, the image simulation method becomes ineffective and the image recovery cannot work normally. In our design framework, we use the spot size to evaluate the imaging performance of the freeform system. For a field point j with N sampled rays of different pupil coordinates, the 100% spot size can be calculated by

$${\chi _j} = \max ({(2 \times {||{\boldsymbol{p}_\mu^{\{ \textrm{local}\} } - \boldsymbol{p}_1^{\{ \textrm{local}\} }} ||^2})_{1 \le \mu \le N}}), $$
where p{local} represents the local coordinates of the ray on the image plane, μ represents different pupil coordinates, the subscript 1 represents the chief ray of the field point. For totally M field points sampled in the optimization process, if the maximum 100% spot size among all field points exceeds the PSF grid, then a non-zero loss function will be added
$$P = \max ({({\chi _j})_{1 \le j \le M}}), $$
$${L_{\textrm{spot}}} = \left\{ \begin{array}{cll} P - K\Delta x,&\textrm{if}&P\, >\, K\Delta x\\0,& \textrm{if}&P \le K\Delta x \end{array}, \right.$$
where KΔx represents the size of the PSF grid. Here we assume the size of each pixel on the image plane is the same in x and y directions. If there are no requirements on the spot size during the optimization, other image quality metrics such as wavefront aberration, MTF and encircled or ensquared energy can be used (larger loss function corresponds to worse image quality).

2.4.2 Loss related to the difference of the images

The goal of our proposed optical-digital joint design method is to make the recovered images as similar as possible to the real scenes. An image evaluation function can be used to describe the similarity between two images. The Peak Signal-to-Noise Ratio (PSNR) is a traditional loss function, but it can only evaluate two images at the pixel level. In this paper, we choose the Structural Similarity (SSIM) as the evaluation function. It evaluates the similarity of two images in terms of luminance, contrast, and structure by calculating the means and variances of the pixels of the two images and the covariance between them. An ideal recovered image has an SSIM equal to 1. For a total of T image pairs to be evaluated, the loss can be written as follows:

$${L_{\textrm{img}}} = 1 - \frac{{\sum\limits_{t = 1}^T {\textrm{SSIM(}\boldsymbol{OB}{\boldsymbol{J}_t}\textrm{,}\boldsymbol{IM}{\boldsymbol{G}_{\textrm{rec},t}}\textrm{)}} }}{T}, $$
where OBJ is the real scene and IMGrec is the recovered image. The simulated image is obtained by convolving the PSFs of multiple field points across the full FOV with the real scene; therefore, the image quality of all the sampled field points is considered in this loss. During image simulation and joint optimization, all the field points have the same weight. Besides the commonly used evaluation functions PSNR and SSIM mentioned above, using the output of some neural network feature layers as a perceptual loss may achieve better results [31], but we do not use this method here due to memory limitations.
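A minimal form of this loss can be sketched as follows. This NumPy sketch computes SSIM with a single global window over the whole image; practical implementations, including the windowed SSIM normally used in training, slide a Gaussian window across the images.

```python
import numpy as np

def ssim_global(a, b, c1=0.01**2, c2=0.03**2):
    """Single-window SSIM from the means, variances and covariance of the
    two images (a simplification of the usual sliding-window SSIM)."""
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a**2 + mu_b**2 + c1) * (var_a + var_b + c2))

def image_loss(pairs):
    """L_img = 1 - mean SSIM over T (ground truth, recovered) pairs, Eq. (12)."""
    return 1.0 - np.mean([ssim_global(o, r) for o, r in pairs])
```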

2.4.3 Loss about system constraints

During optical design, some constraints should be added in order to meet the design requirements on system specifications, structure, volume, and aberrations. In our design framework, the constraints are directly integrated into the loss function, and can be considered penalty functions.

System specifications such as the focal length should be controlled during design. For an off-axis freeform imaging system, the traditional calculation method using surface radii, refractive indices and thicknesses is no longer valid. Here, we calculate the focal length of the system using ray tracing data. For the case where the object is located at infinity, we trace the chief rays corresponding to a small field angle $\theta$ in the x and y directions relative to the central field, whose image heights in the x and y directions relative to the image point of the central field are hx and hy, respectively. The focal lengths in the x and y directions can then be calculated as follows:

$${f_\textrm{x}} = \frac{{{h_\textrm{x}}}}{{\tan \theta }},{\kern 1cm} {f_\textrm{y}} = \frac{{{h_\textrm{y}}}}{{\tan \theta }}. $$

The focal length loss is the sum of the absolute differences between the calculated focal lengths and the target values fx, target and fy, target:

$${L_{\textrm{EFL}}} = |{{f_\textrm{x}} - {f_{\textrm{x, target}}}} |+ |{{f_\textrm{y}} - {f_{\textrm{y, target}}}} |. $$
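Equations (13)-(14) amount to only a few lines. In this sketch, h_x and h_y are assumed to come from tracing the two small-angle chief rays; the function name is illustrative.

```python
import numpy as np

def efl_loss(h_x, h_y, theta_deg, fx_target, fy_target):
    """Focal lengths from small-field chief-ray image heights, Eq. (13),
    and the focal-length loss L_EFL of Eq. (14)."""
    t = np.tan(np.radians(theta_deg))
    fx, fy = h_x / t, h_y / t
    return abs(fx - fx_target) + abs(fy - fy_target)
```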

Different from coaxial system design, avoiding light obstruction is essential in the design of off-axis systems, especially reflective ones. We avoid obstruction by controlling the distance from the intersection of a typical marginal ray with a surface to the marginal ray of the light beams. δ denotes this distance, and a negative value of δ indicates the presence of obstruction. δtarget denotes the required minimum clearance. The loss with respect to obstruction (for one distance) can be written as:

$${L_{\textrm{dis}}} = \left\{ \begin{array}{cll} {\delta_{\textrm{target}}} - \delta, &\textrm{if} &\delta < {\delta_{\textrm{target}}}\\0 ,& \textrm{if} &\delta \ge {\delta_{\textrm{target}}} \end{array} \right.. $$

For an off-axis reflective system, multiple distances at different locations should be controlled to eliminate obstruction. If there are D distances in total that need to be controlled, the loss with respect to obstruction can be written as:

$${L_{\textrm{obs}}} = \sum\limits_{g = 1}^D {{L_{\textrm{dis, }g}}}. $$

The distortion of the system should also be controlled. The ideal image heights in the x and y directions for different field points can be determined using the target focal length and the field angle, and the actual image heights can be obtained by ray tracing. For example, the relative distortion in the x direction of a field point whose field angle in the x direction is α can be calculated by

$$\gamma = \frac{{{h_{\textrm{x, ideal}}}(\alpha ) - {h_\textrm{x}}(\alpha )}}{{{h_{\textrm{x, ideal}}}}} \times 100\%, $$
where hx, ideal and hx are the ideal and actual image heights in the x direction, respectively. A total of W relative distortion values in both the x and y directions can be calculated for all sampled fields across the full FOV (excluding the 0° field angle). Here we control both the mean and the maximum relative distortion. The loss about distortion can be expressed as
$${L_{\textrm{mean,dst}}} = \textrm{mean}({(|{{\gamma_k}} |)_{1 \le k \le W}}),{\kern 1pt} {\kern 1pt} {L_{\textrm{max,dst}}} = \textrm{max}({(|{{\gamma_k}} |)_{1 \le k \le W}}), $$
$${L_{\textrm{dst}}} = {L_{\textrm{mean,dst}}} + {w_{\textrm{max,dst}}}{L_{\textrm{max,dst}}}, $$
where wmax,dst is the weight for maximum relative distortion in all field points and both directions.
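Equations (16)-(18) combine into a short distortion penalty. This sketch assumes the ideal and traced image heights of the W sampled fields are already stacked into arrays; the function name is illustrative.

```python
import numpy as np

def distortion_loss(h_ideal, h_actual, w_max_dst=1.0):
    """Mean plus weighted maximum |relative distortion|, Eqs. (16)-(18).
    h_ideal, h_actual: image heights of the W sampled non-zero fields,
    stacked over both the x and y directions."""
    gamma = np.abs((h_ideal - h_actual) / h_ideal)   # relative distortion
    return gamma.mean() + w_max_dst * gamma.max()
```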

For freeform imaging system design, it is recommended that the light beams use the central area of each freeform surface, not an off-axis area. This can be controlled by constraining the coordinates of the point where the chief ray of the central field intersects each freeform surface. For a system with B surfaces (including the image plane), the local coordinate of the intersection on the bth surface is denoted as ${\boldsymbol{p}_1^{\{ \textrm{local}\} }(b)}$, and the related loss can be written as
$${L_{\textrm{center}}} = \sum\limits_{b = 1}^B {{{||{\boldsymbol{p}_1^{\{ \textrm{local}\} }(b)} ||}^2}}. $$

For the compact system design on which we focus in this work, the size or volume of the system should be controlled during optimization. The maximum allowable sizes of the system in the x, y and z directions are denoted as Vx, max, Vy, max and Vz, max, and the actual sizes as Vx, Vy and Vz, respectively. The actual size in each direction is obtained by first determining the possible edge points of the system in that direction and then finding the largest distance between them. The loss about the volume of the system can be written as:

$${L_{\textrm{vol}}} = \max (0,{V_\textrm{x}} - {V_{\textrm{x,max}}}) + \max (0,{V_\textrm{y}} - {V_{\textrm{y,max}}}) + \max (0,{V_\textrm{z}} - {V_{\textrm{z,max}}}). $$

The total loss during training and optimization is the weighted sum of the above individual losses.

$${L_1} = {w_{\textrm{img}}}{L_{\textrm{img}}}, $$
$${L_\textrm{2}} = {w_{\textrm{spot}}}{L_{\textrm{spot}}} + {w_{\textrm{EFL}}}{L_{\textrm{EFL}}} + {w_{\textrm{obs}}}{L_{\textrm{obs}}} + {w_{\textrm{dst}}}{L_{\textrm{dst}}} + {w_{\textrm{center}}}{L_{\textrm{center}}} + {w_{\textrm{vol}}}{L_{\textrm{vol}}}. $$

The loss of each epoch Ltotal during the training process is shown below

$${L_{\textrm{total}}} = {L_1} + {L_2}. $$

2.5 Joint optimization

As the loss function Ltotal can now be constructed, the joint optimization can be conducted by simultaneously training the neural network and optimizing the freeform system. Here, we use ξ to represent the values of the parameters that can be optimized in the freeform optical system (surface coefficients and surface locations), and ζ to represent the values of the parameters that can be optimized in the image recovery net (weights and biases). The joint optimization process can be considered as finding the ξ and ζ which minimize the loss function Ltotal

$$\mathop {\textrm{argmin}}\limits_{\boldsymbol{\xi },\boldsymbol{\zeta }} ({L_{\textrm{total}}}(\boldsymbol{\xi },\boldsymbol{\zeta })). $$

The framework can be realized using PyTorch. In the forward pass of each epoch, Ltotal is calculated according to (ξ, ζ), and in the backward propagation process the partial derivatives with respect to (ξ, ζ) are calculated. The optimization algorithm can be chosen from SGD, RMSprop, Adagrad, Adam, etc. The design process continues until the change of the loss function between two adjacent epochs is smaller than a preset value or the maximum number of epochs is reached.
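As a sketch of how this maps onto a PyTorch training loop (the tensors and the stand-in loss below are hypothetical placeholders; the real forward pass runs the differentiable ray tracing, image simulation and recovery described above):

```python
import torch

# xi: optical parameters (surface coefficients and locations);
# zeta: weights and biases of the image recovery net.
xi = torch.zeros(8, requires_grad=True)    # placeholder surface terms
net = torch.nn.Conv2d(1, 1, 3, padding=1)  # stand-in for the recovery net

def total_loss(xi, net):
    # Stand-in for L_total = L1 + L2; the real version traces rays,
    # builds PSFs, simulates and recovers images, and sums the weighted losses.
    rec = net(torch.ones(1, 1, 8, 8))
    return rec.mean() ** 2 + (xi ** 2).sum()

optimizer = torch.optim.Adam([{"params": [xi]},
                              {"params": net.parameters()}], lr=1e-4)

prev, tol, max_epochs = float("inf"), 1e-9, 100
for epoch in range(max_epochs):
    optimizer.zero_grad()
    loss = total_loss(xi, net)  # forward pass with current (xi, zeta)
    loss.backward()             # backward pass: gradients w.r.t. (xi, zeta)
    optimizer.step()
    if abs(prev - loss.item()) < tol:  # stop when the loss change is below a preset value
        break
    prev = loss.item()
```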

3. Design examples

In this section, we present two design examples to show the feasibility and effect of the proposed method in reducing the system volume and the number of elements in the freeform imaging system.

3.1 Freeform system design achieving smaller volume

The first design example is a freeform off-axis three-mirror system with a small volume. The system specifications are given in Table 1. Firstly, we designed a traditional system without considering image recovery. The system layout is shown in Figs. 6(a) and 6(d). This initial large-volume three-mirror system was designed in the optical design software CODE V, starting from a system (threemrc.len) in the CODE V sample lens library. The system was first scaled to the required focal length. Through successive optimization, the system parameters were changed to meet the design requirements and the surface type was upgraded to an XY-polynomial freeform surface. During optimization, the aperture stop was moved to a position in front of the primary mirror. The focal lengths of the system in the x and y directions were calculated using the ABCD matrix method and then controlled. The relative distortion of the system was controlled in both the x and y directions using real ray tracing data. The distances shown in Fig. 6(b) have to be controlled to eliminate light obscuration and avoid surface interference. The chief ray of the central field was controlled to intersect each freeform surface at its center (the vertex of the freeform surface). The error function used in the optimization is the default transverse ray aberration type in CODE V. The surface type for the three mirrors is an XY-polynomial freeform surface up to the fourth order, as given in Eq. (26). As the system is symmetric about the YOZ plane, the odd terms in x were not considered.

$$h(x,y) = \frac{c(x^2+y^2)}{1+\sqrt{1-(1+\kappa)c^2(x^2+y^2)}} + A_1x^2 + A_2y^2 + A_3x^2y + A_4y^3 + A_5x^4 + A_6x^2y^2 + A_7y^4. $$
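Eq. (26) is straightforward to evaluate as a differentiable function; a minimal sketch (our own variable names) is:

```python
import torch

def xy_freeform_sag(x, y, c, kappa, A):
    """Sag h(x, y) of the fourth-order XY-polynomial freeform surface of
    Eq. (26). A holds [A1, ..., A7]; odd powers of x are omitted because
    the system is symmetric about the YOZ plane."""
    r2 = x * x + y * y
    base = c * r2 / (1 + torch.sqrt(1 - (1 + kappa) * c ** 2 * r2))
    poly = (A[0] * x ** 2 + A[1] * y ** 2 + A[2] * x ** 2 * y + A[3] * y ** 3
            + A[4] * x ** 4 + A[5] * x ** 2 * y ** 2 + A[6] * y ** 4)
    return base + poly

# Symmetry check: the sag is even in x, as required by the YOZ symmetry.
x, y = torch.tensor(1.5), torch.tensor(-2.0)
c, kappa = torch.tensor(0.01), torch.tensor(-1.0)
A = torch.linspace(1e-5, 7e-5, 7)
sag = xy_freeform_sag(x, y, c, kappa, A)
```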

The design goal of this system at this stage is to minimize the aberrations and obtain good imaging performance. Strict structural constraints on the system volume were not added, in order to satisfy the basic imaging requirement. The volume of the system is 21.61 mm (X) × 52.52 mm (Y) × 36.68 mm (Z) = 41.63 mL. The modulation transfer function (MTF) at 25 lps/mm is above 0.14. The maximum relative distortion is about 1.23% and the average relative distortion is about 0.46% (in this section the relative distortions in the x and y directions are considered). If the system volume were reduced further, the imaging performance would become very poor and would no longer satisfy the imaging requirement. Since a neural network can be used to recover the image, optical-digital joint optimization is applied; it is therefore possible to obtain a system with a smaller volume while maintaining good image quality. We first reduced the system volume in CODE V, allowing larger aberrations while keeping the basic system folding geometry, controlling the distortion, and maintaining the system specifications. The volume reduction was realized by adding constraints to control the system size in the x, y and z directions, similar to the volume constraint described in Section 2.4. This is a separate step before the joint optimization: if a system far from the design requirement is taken as the initial system, the joint optimization may be much slower and may even fail. The layout of this system is shown in Figs. 6(b) and 6(e). The volume of this system after software optimization is 12.64 mL. The average 100% spot diameter of this system is 0.37 mm. The imaging performance of this system is poor, so it cannot be used for imaging directly. This system was taken as the initial system for the joint optimization process. There is no connection between the optical design software and the neural network during the joint optimization process.
However, to improve design efficiency, after the design of the initial small-volume three-mirror system in CODE V, the surface data (surface coefficients and locations) are read directly by Python for the joint optimization instead of being copied manually, as both the CODE V application programming interface (API) and Python support the Microsoft Windows standard Component Object Model (COM) interface.


Fig. 6. The layout of the freeform three-mirror system. (a) The system with large volume designed by optical design software. (b) The initial system with small volume designed by optical design software. (c) The system with small volume designed by joint optimization. The shaded models of the systems in (a)-(c) are given in (d)-(f), respectively.


Table 1. Specifications of the system

During joint optimization, six distances are controlled to eliminate light obstruction, as shown in Fig. 6(b) (red dashed lines). The current size of the initial small-volume system in the x, y and z directions was taken as the maximum size in each direction during design. 28 different field points across the half FOV (zero and positive field angles in the x direction) were sampled (note that the system is symmetric about the YOZ plane), and 721 rays with different pupil coordinates were sampled across the full pupil for each field point. These rays were traced through the system and the results were used to calculate the spot diameter on the image plane and the PSFs of the sampled fields. The pixel size of the image plane is 20 µm × 20 µm, and an 81 × 81 pixel grid (centered at the image point of the chief ray) was used to characterize the PSF of one field. For this initial small-volume system, the simulated PSFs across the full FOV are shown in Fig. 7(a). Simulated images can be generated using the PSFs of these 28 fields together with the PSFs of another 141 field points (21 fields in the −x direction are obtained directly from the PSFs of the fields in the +x direction due to symmetry, and the other 120 fields are calculated by interpolation). Other numbers of field points are also acceptable depending on the actual situation. We chose 400 images as the training dataset and 100 images as the testing dataset from the public dataset DIV2K [32], and only the central area (512 × 512 pixels) of each image is used. The joint optimization was performed on a computer with an Intel i9-12900K CPU and an Nvidia RTX 3090 Ti GPU with 24 GB of memory. We chose Adam as the optimizer. It is worth noting that the learning rates for the various parameters of the optical system are not the same, as different parameters have different impacts on the loss function of the system.
Specifically, in this design, the learning rates for the conic constants, surface vertex positions, surface tilts about the x-axis, and the parameters of the image recovery network were set to 1e-4, while the learning rate for the surface curvatures was set to 1e-5. The learning rate for each higher-order surface term was set to the order of magnitude of its initial value multiplied by 1e-2. After 80 epochs, the learning rates started to decay exponentially with a multiplicative decay factor of 0.9. The batch size was set to 8.
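These per-parameter learning rates and the delayed exponential decay can be expressed with PyTorch parameter groups and an `ExponentialLR` scheduler. A sketch with hypothetical placeholder tensors (the 1e-3 initial magnitude of the higher-order terms is an assumed example, not a value from the paper):

```python
import torch

# Hypothetical tensors standing in for the real optical variables.
conic = torch.zeros(3, requires_grad=True)       # conic constants
vertex = torch.zeros(3, 3, requires_grad=True)   # surface vertex positions
tilt = torch.zeros(3, requires_grad=True)        # tilts about the x-axis
curvature = torch.zeros(3, requires_grad=True)   # surface curvatures
high_order = torch.full((3, 7), 1e-3, requires_grad=True)  # A1..A7 per mirror

optimizer = torch.optim.Adam([
    {"params": [conic, vertex, tilt], "lr": 1e-4},
    {"params": [curvature], "lr": 1e-5},
    # higher-order terms: lr tied to the magnitude of their initial values
    {"params": [high_order], "lr": 1e-3 * 1e-2},
])
# Exponential decay (factor 0.9 per epoch), switched on after epoch 80.
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)

for epoch in range(100):
    optimizer.step()  # placeholder for the real forward/backward/step
    if epoch >= 80:
        scheduler.step()
```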


Fig. 7. The simulated PSFs across the full FOV of the (a) Initial small volume system and (b) system after joint optimization. Note that these PSFs are the result of the geometric optical system and image recovery is not considered.


A total of 100 epochs were used for the joint optimization process, which took about 21.5 hours. The best optimization result is given in Table 2, which lists the volume and the average SSIM over the testing dataset for the large-volume system, the initial small-volume system, and the system after joint optimization (considering image recovery). The volume of the system after joint optimization is 16.81 mm (X) × 42.34 mm (Y) × 17.73 mm (Z) = 12.62 mL. The volume of the final freeform system is thus reduced by 69.69% compared with the original large-volume system, while the SSIM values are similar. In addition, the SSIM of the final system after image recovery is much higher than that of the initial small-volume system. The PSNR values of these systems are also given in Table 2 for reference. The simulated PSFs of the system after joint optimization are shown in Fig. 7(b); the PSFs of different fields become more similar after optimization compared with the initial system. The maximum relative distortion is about 1.27%, and the average relative distortion is about 0.38%. The layout of this system is shown in Figs. 6(c) and 6(f). Simulated images of the initial large-volume system, the initial small-volume system and the system after optimization are shown in Fig. 8. It is worth noting that although the overall evaluation indicates that Fig. 8(a) is better, some sub-areas in Fig. 8(c) may have higher SSIM than those in Fig. 8(a). The selected and zoomed-in areas in each image in Fig. 8 were chosen to illustrate the performance differences before and after the joint optimization as well as against the real scene, particularly in areas with more complex features or details.
These areas were not chosen for any specific optical or neural network related significance, but they provide a visual comparison and insight into the limitations and strengths of the proposed method in terms of image recovery ability and detail preservation. MTF is not added directly into the loss function during the joint optimization. After the joint optimization, the MTF drops to zero at about 4 lps/mm because the system volume is reduced significantly, but the quality of the recovered images is similar to the real scene and is significantly improved compared with the simulated images generated by the initial small-volume system. We further conducted an additional experiment in which we trained an image recovery network for the initial small-volume system without optimizing the system (the parameters of the initial system were not changed). After training, the average SSIM of the testing dataset is 0.8005. In conclusion, the above design results and analysis show that the proposed design framework can effectively reduce the volume of the freeform imaging system. There is no specific performance degradation limit for image recovery in the proposed method, as the limit differs for different system design tasks and different training datasets; it also depends on the target image recovery performance.


Fig. 8. (a) Simulated images of large volume three-mirror system, (b) simulated images of initial small volume three-mirror system, (c) recovered images of the small volume three-mirror system after joint optimization, and (d) the real scene. The details are shown below the full-size images. The value of SSIM is calculated with respect to the real scene.


Table 2. Quantitative evaluation of averaged SSIM and PSNR on test dataset and the system volume for the freeform three-mirror system design

3.2 Freeform system design with reduced number of surfaces

The second example shows the feasibility of the proposed method in reducing the number of elements in the freeform system. Here we designed a freeform off-axis two-mirror system whose system specifications (shown in Table 1) are the same as those of the three-mirror system given in Section 3.1. As there are only two freeform mirrors in the system, the complexity and the assembly difficulty of the system are much lower than for the three-mirror system. Using the joint optimization process, it is possible to obtain a freeform two-mirror system that can output high-performance images (after image recovery) comparable to those of the large-volume three-mirror system shown in Fig. 6(a).

We first designed an initial two-mirror system using CODE V. An XY-polynomial surface up to the 4th order was taken as the freeform surface type. The imaging performance of the system was poor, while the system specifications, distortion and light obstruction were controlled; strict structural constraints on the system volume were not added. The layout of the system is shown in Figs. 9(a) and 9(c). The joint optimization process was similar to the process demonstrated in Section 3.1. Four distances are controlled to eliminate light obstruction, as shown in Fig. 9(a) (red dashed lines). For this initial two-mirror system, the simulated PSFs across the full FOV are shown in Fig. 10(a). A total of 100 epochs were used for the joint optimization process, which took about 16.3 hours. The best optimization result is given in Table 3, which lists the average SSIM and PSNR over the testing dataset for the initial large-volume three-mirror system, the initial two-mirror system, and the two-mirror system after joint optimization. The SSIM of the final system after image recovery is similar to that of the three-mirror system, and is much higher than that of the initial two-mirror system. The maximum relative distortion is about 1.30%, and the average relative distortion is about 0.51%. The layout of this system is shown in Figs. 9(b) and 9(d). The simulated PSFs of the system after joint optimization are shown in Fig. 10(b). Simulated images of the initial large-volume three-mirror system, the initial two-mirror system and the system after optimization are shown in Fig. 11. The quality of the recovered images is similar to the real scene, and is significantly improved compared with the simulated images generated by the initial system. We also conducted an additional experiment in which we trained an image recovery network for the initial two-mirror system without optimizing the system (the parameters of the initial system were not changed). After training, the average SSIM of the testing dataset is 0.8822. In conclusion, the above design results and analysis show that the proposed design framework can effectively reduce the number of elements in the freeform imaging system. Table 4 shows the surface coefficients and surface sag of the initial large-volume three-mirror freeform system, the small-volume three-mirror system after joint optimization and the two-mirror system after joint optimization.


Fig. 9. The layout of the freeform two-mirror system. (a) The system designed by optical design software. (b) The system designed by joint optimization. The shaded models of the systems in (a)-(b) are given in (c)-(d), respectively.


Fig. 10. The simulated PSFs across the full FOV of the (a) Initial two-mirror system and (b) system after joint optimization. Note that these PSFs are the result of the geometric optical system and image recovery is not considered.


Fig. 11. (a) Simulated images of large volume three-mirror system, (b) simulated images of initial two-mirror system, (c) recovered images of the two-mirror system after joint optimization, and (d) the real scene. The details are shown below the full-size images. The value of SSIM is calculated with respect to the real scene. It is worth noting that although the overall evaluation indicates that Fig. 11(a) is better, it is possible for some sub-areas in Fig. 11(c) to have higher SSIM than those in Fig. 11(a).


Table 3. Quantitative evaluation of averaged SSIM and PSNR on test dataset and for the freeform two-mirror system design


Table 4. Surface coefficients and surface sag of freeform systems

4. Conclusion and discussions

In this paper, we proposed a design method for ultra-compact and simplified freeform imaging systems using optical-digital joint optimization. The design framework, ray tracing, image simulation and recovery, and loss function establishment are demonstrated in detail. By fully integrating the optical design of the freeform imaging system and the training of the image recovery neural network, freeform system designs with ultra-compact and simplified structures as well as good recovered image performance can be realized. Two design examples are given to show the effect of the proposed method. The first example is a freeform off-axis three-mirror system designed by joint optimization whose volume is 69.69% smaller than that of a traditional freeform three-mirror system. The second example is a freeform off-axis two-mirror system whose number of elements is smaller than that of the three-mirror system while the quality of the generated images is similar. The proposed method can be used in the design and development of freeform imaging systems in areas such as remote sensing and commercial and industrial cameras, significantly reducing the overall system size and complexity as well as the difficulty of system integration and assembly. Furthermore, although the proposed method is demonstrated for freeform surfaces, it is also applicable to systems using other surface types such as spherical and aspherical surfaces, as well as their combination with freeform surfaces; the differences lie in the surface expression and the number of surface coefficients. As the degrees of design freedom may differ, the achievable system specifications and compactness may differ as well. The proposed framework can also be extended to the joint design of other kinds of off-axis nonsymmetric imaging systems using phase elements such as holographic elements and metasurfaces.

Currently, the proposed method works for reflective systems, so dispersion and chromatic aberration do not exist. In addition, the diffraction effect of the system is ignored, as the aberrations of the system are much larger and the diffraction effect is not significant in the visible band, which we focus on in the examples section. As a result, the method does not currently consider the wavelength value (a single-wavelength PSF is calculated, which is adequate for the image simulation). For refractive systems, dispersion exists when a spectral band is used. The basic design and optimization process proposed in this paper does not change, but dispersion and chromatic aberration should be considered. We can select discrete sampled wavelengths within the spectral band. For each wavelength, the ray tracing, PSF calculation and image simulation are done separately. The final simulated image is the integration of the simulated images of all wavelengths. The SSIM for the color image can be calculated as the weighted average of the SSIM values of the individual wavelengths, which can then be used to calculate the loss function for joint optimization. The calculation of the spot size for each field should consider the rays of multiple wavelengths, and lateral chromatic aberration should be controlled by constraining the distances between the image points of the chief rays of different wavelengths. In addition, if the wavelength is large, or the geometric image quality is good enough to be comparable with the diffraction effect, the diffraction effect cannot be ignored; in that case, the PSF calculation and the image simulation process should consider diffraction.
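The per-wavelength simulation and spectral integration described above can be sketched as follows. This is a toy version using a single shift-invariant PSF per wavelength (the actual simulation is patch-wise with field-dependent PSFs), and all names are ours:

```python
import numpy as np

def simulate_color(obj, psfs, weights):
    """Simulate the image separately at each sampled wavelength and
    integrate over the band.

    obj     -- 2-D object (per-wavelength radiance assumed equal here)
    psfs    -- list of 2-D PSFs, one per sampled wavelength
    weights -- spectral weights, summing to 1
    """
    sims = []
    for psf in psfs:
        # Circular convolution via FFT keeps the sketch short; the
        # real simulation convolves patch-wise with local PSFs.
        sim = np.real(np.fft.ifft2(np.fft.fft2(obj)
                                   * np.fft.fft2(psf, obj.shape)))
        sims.append(sim)
    return sum(w * s for w, s in zip(weights, sims))

# Sanity check: delta-function PSFs at every wavelength reproduce the object.
obj = np.arange(64.0).reshape(8, 8)
delta = np.zeros((8, 8)); delta[0, 0] = 1.0
out = simulate_color(obj, [delta, delta], [0.5, 0.5])
```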

Besides the above discussion of wavelength and refractive systems, the current design framework still has some limitations. For example, the number of sampled field points used for image simulation may be limited by memory and time costs. Design algorithms and frameworks that reduce these costs are the key to obtaining accurate design results efficiently. Currently, a freeform system satisfying the basic structure requirements and system specifications has to be designed separately as the initial system for joint optimization. Distortion should be optimized to be very small for the system after joint design, as it is not considered in the image recovery process. Future work will explore joint optimization frameworks that can overcome these limitations.

Many image restoration algorithms, such as classical Wiener filtering, operate in the frequency domain. In our method, the spot diameter is controlled during the joint optimization. The spot diagram only contains geometric information and does not consider frequency information. Due to the memory limitation of the computer and the computation time of the PSF calculation and image simulation, the size of the PSF grid KΔx of each field point should not be too large. The size of the geometrical spot should be smaller than (or only slightly larger than) the PSF grid size in order to keep the PSF calculation and image simulation correct; therefore, the spot size is controlled during optimization. In our method, both the geometric freeform imaging system and the image recovery network are important for achieving a good image recovery effect and modulating the PSF; controlling the spot size alone does not play a decisive role. In future research, we will also consider adding frequency-domain criteria to the optimization process to make it more efficient and stable.
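The spot-size constraint described here is the hinge loss L_spot introduced earlier: it penalizes the largest 100% spot diameter P over all fields only when it exceeds the PSF grid extent KΔx. A sketch using the example numbers from Section 3.1:

```python
def spot_loss(P, K, dx):
    """Hinge loss: penalize the largest 100% spot diameter P (over all
    fields) only when it exceeds the PSF grid extent K * dx."""
    return max(0.0, P - K * dx)

# An 81 x 81 grid of 20 um pixels spans 1.62 mm, so the 0.37 mm average
# spot of the initial small-volume system fits and incurs no penalty.
L_spot = spot_loss(0.37, 81, 0.02)
```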

Funding

National Key Research and Development Program of China (2022YFB3603400); National Natural Science Foundation of China (62275019, U21A20140); Young Elite Scientist Sponsorship Program by CAST (2019QNRC001).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. J. P. Rolland, M. A. Davies, T. J. Suleski, C. Evans, A. Bauer, J. C. Lambropoulos, and K. Falaggis, “Freeform optics for imaging,” Optica 8(2), 161 (2021). [CrossRef]  

2. R. Tang, G. Jin, and J. Zhu, “Freeform off-axis optical system with multiple sets of performance integrations,” Opt. Lett. 44(13), 3362 (2019). [CrossRef]  

3. Q. Meng, H. Wang, W. Liang, Z. Yan, and B. Wang, “Design of off-axis three-mirror systems with ultrawide field of view based on an expansion process of surface freeform and field of view,” Appl. Opt. 58(3), 609 (2019). [CrossRef]  

4. L. Chen, Z. Gao, J. Ye, X. Cao, N. Xu, and Q. Yuan, “Construction method through multiple off-axis parabolic surfaces expansion and mixing to design an easy-aligned freeform spectrometer,” Opt. Express 27(18), 25994 (2019). [CrossRef]  

5. A. Bauer, E. M. Schiesser, and J. P. Rolland, “Starting geometry creation and design method for freeform optics,” Nat. Commun. 9(1), 1756 (2018). [CrossRef]  

6. Y. Dai, Y. Liu, F. Shen, C. Kuang, Z. Zheng, and R. Wu, “Calculation of aberration fields for freeform imaging systems using field-dependent footprints on local tangent planes,” Appl. Opt. 61(32), 9576 (2022). [CrossRef]  

7. L. Gu, D. Cheng, Y. Liu, J. Ni, T. Yang, and Y. Wang, “Design and fabrication of an off-axis four-mirror system for head-up displays,” Appl. Opt. 59(16), 4893 (2020). [CrossRef]  

8. S. Wei, Z. Fan, Z. Zhu, and D. Ma, “Design of a head-up display based on freeform reflective systems for automotive applications,” Appl. Opt. 58(7), 1675 (2019). [CrossRef]  

9. D. Cheng, Y. Wang, H. Hua, and J. Sasian, “Design of a wide-angle, lightweight head-mounted display using free-form optics tiling,” Opt. Lett. 36(11), 2098 (2011). [CrossRef]  

10. D. Cheng, Y. Wang, H. Hua, and M. M. Talha, “Design of an optical see-through head-mounted display with a low f-number and large field of view using a freeform prism,” Appl. Opt. 48(14), 2655 (2009). [CrossRef]  

11. Z. Qin, S.-M. Lin, K.-T. Luo, C.-H. Chen, and Y.-P. Huang, “Dual-focal-plane augmented reality head-up display using a single picture generation unit and a single freeform mirror,” Appl. Opt. 58(20), 5366 (2019). [CrossRef]  

12. D. Cheng, Q. Wang, Y. Wang, and G. Jin, “Lightweight spatial-multiplexed dual focal-plane head-mounted display using two freeform prisms,” Chin. Opt. Lett. 11(3), 031201 (2013). [CrossRef]  

13. Y. Liu, A. Bauer, T. Viard, and J. P. Rolland, “Freeform hyperspectral imager design in a CubeSat format,” Opt. Express 29(22), 35915 (2021). [CrossRef]  

14. J. Reimers, A. Bauer, K. P. Thompson, and J. P. Rolland, “Freeform spectrometer enabling increased compactness,” Light: Sci. Appl. 6(7), e17026 (2017). [CrossRef]  

15. B. Zhang, Y. Tan, G. Jin, and J. Zhu, “Imaging spectrometer with single component of freeform concave grating,” Opt. Lett. 46(14), 3412 (2021). [CrossRef]  

16. Z. Zhuang, J. Parent, P. Roulet, and S. Thibault, “Freeform wide-angle camera lens enabling mitigable distortion,” Appl. Opt. 61(18), 5449 (2022). [CrossRef]  

17. C. Xu, W. Song, and Y. Wang, “Design of a miniature anamorphic lens with a freeform front group and an aspheric rear group,” Opt. Eng. 60(06), 1 (2021). [CrossRef]  

18. Y. Yan and J. Sasian, “Miniature Camera Lens Design with a Freeform Surface,” (2017).

19. J. Biemond, “Maximum likelihood image and blur identification: a unifying approach,” Opt. Eng. 29(5), 422 (1990). [CrossRef]  

20. L. B. Lucy, “An iterative technique for the rectification of observed distributions,” Astron. J. 79, 745 (1974). [CrossRef]  

21. W. H. Richardson, “Bayesian-Based Iterative Method of Image Restoration*,” J. Opt. Soc. Am. 62(1), 55 (1972). [CrossRef]  

22. V. Sitzmann, S. Diamond, Y. Peng, X. Dun, S. Boyd, W. Heidrich, F. Heide, and G. Wetzstein, “End-to-end optimization of optics and image processing for achromatic extended depth of field and super-resolution imaging,” ACM Trans. Graph. 37(4), 1–13 (2018). [CrossRef]  

23. Y. Peng, Q. Fu, H. Amata, S. Su, F. Heide, and W. Heidrich, “Computational imaging using lightweight diffractive-refractive optics,” Opt. Express 23(24), 31393 (2015). [CrossRef]  

24. Y. Peng, Q. Sun, X. Dun, G. Wetzstein, W. Heidrich, and F. Heide, “Learned large field-of-view imaging with thin-plate optics,” ACM Trans. Graph. 38(6), 1–14 (2019). [CrossRef]  

25. Q. Sun, C. Wang, Q. Fu, X. Dun, and W. Heidrich, “End-to-end complex lens design with differentiate ray tracing,” ACM Trans. Graph. 40(4), 1–13 (2021). [CrossRef]  

26. C. Wang, N. Chen, and W. Heidrich, “dO: A Differentiable Engine for Deep Lens Design of Computational Imaging Systems,” IEEE Trans. Comput. Imaging 8, 905–916 (2022). [CrossRef]  

27. Z. Li, Q. Hou, Z. Wang, F. Tan, J. Liu, and W. Zhang, “End-to-end learned single lens design using fast differentiable ray tracing,” Opt. Lett. 46(21), 5453 (2021). [CrossRef]  

28. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, N. Navab, J. Hornegger, W. M. Wells, and A. F. Frangi, eds., Lecture Notes in Computer Science (Springer International Publishing, 2015), 9351, pp. 234–241.

29. K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” in Conference on Computer Vision and Pattern Recognition (IEEE, 2016), pp. 770–778.

30. Z. Zhang, Q. Liu, and Y. Wang, “Road Extraction by Deep Residual U-Net,” IEEE Geosci. Remote Sensing Lett. 15(5), 749–753 (2018). [CrossRef]  

31. J. Johnson, A. Alahi, and L. Fei-Fei, “Perceptual Losses for Real-Time Style Transfer and Super-Resolution,” (2016).

32. E. Agustsson and R. Timofte, “NTIRE 2017 Challenge on Single Image Super-Resolution: Dataset and Study,” in Conference on Computer Vision and Pattern Recognition Workshops (IEEE, 2017), pp. 1122–1131.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Cited By

Optica participates in Crossref's Cited-By Linking service. Citing articles from Optica Publishing Group journals and other participating publishers are listed here.

Alert me when this article is cited.


Figures (11)

Fig. 1.
Fig. 1. The design framework of compact freeform imaging system based on optical-digital joint optimization.
Fig. 2.
Fig. 2. The schematic plot of the ray tracing process. (a) The initial guess of w. (b) The intersection point and the outgoing ray (here a reflective surface is taken as an example).
Fig. 3.
Fig. 3. The schematic plot of the PSF calculation process. (a) The intensity distribution of a single ray on image plane. (b) The intensity distribution of multiple rays from one field point on image plane. Note that the intensity in (a) is plotted exaggeratedly for clarity.
Fig. 4.
Fig. 4. The schematic plot of the image simulation process.
Fig. 5.
Fig. 5. The architecture of the image recovery net.
Fig. 6.
Fig. 6. The layout the freeform three-mirror system. (a) The system with large volume designed by optical design software. (b) The initial system with small volume designed by optical design software. (c) The system with small volume designed by joint optimization. The shaded models of the systems in (a)-(c) and given in (d)-(f) respectively.
Fig. 7.
Fig. 7. The simulated PSFs across the full FOV of the (a) Initial small volume system and (b) system after joint optimization. Note that these PSFs are the result of the geometric optical system and image recovery is not considered.
Fig. 8.
Fig. 8. (a) Simulated images of large volume three-mirror system, (b) simulated images of initial small volume three-mirror system, (c) recovered images of the small volume three-mirror system after joint optimization, and (d) the real scene. The details are shown below the full-size images. The value of SSIM is calculated with respect to the real scene.
Fig. 9.
Fig. 9. The layout the freeform two-mirror system. (a) The system designed by optical design software. (b) The system designed by joint optimization. The shaded models of the systems in (a)-(b) and given in (c)-(d) respectively.
Fig. 10.
Fig. 10. The simulated PSFs across the full FOV of the (a) Initial two-mirror system and (b) system after joint optimization. Note that these PSFs are the result of the geometric optical system and image recovery is not considered.
Fig. 11.
Fig. 11. (a) Simulated images of large volume three-mirror system, (b) simulated images of initial two-mirror system, (c) recovered images of the two-mirror system after joint optimization, and (d) the real scene. The details are shown below the full-size images. The value of SSIM is calculated with respect to the real scene. It is worth noting that although the overall evaluation indicates that Fig. 11(a) is better, it is possible for some sub-areas in Fig. 11(c) to have higher SSIM than those in Fig. 11(a).

Tables (4)


Table 1. Specifications of the system


Table 2. Quantitative evaluation of averaged SSIM and PSNR on test dataset and the system volume for the freeform three-mirror system design


Table 3. Quantitative evaluation of averaged SSIM and PSNR on the test dataset for the freeform two-mirror system design


Table 4. Surface coefficients and surface sag of freeform systems

Equations (26)


\[ h(x,y) = \frac{c(x^2+y^2)}{1+\sqrt{1-(1+\kappa)c^2(x^2+y^2)}} + \sum_{i=0}^{q} A_i\, g_i(x,y), \]
\[ f(x,y,z) = h(x,y) - z. \]
\[ \nabla f = \left(\frac{\partial h(x,y)}{\partial x}, \frac{\partial h(x,y)}{\partial y}, -1\right). \]
\[ f(x,y,z) = f(\mathbf{p} + w\mathbf{d}) = 0. \]
\[ w^{[n]} = w^{[n-1]} - \frac{f(\mathbf{p} + w^{[n-1]}\mathbf{d})}{f'(\mathbf{p} + w^{[n-1]}\mathbf{d})} = w^{[n-1]} - \frac{f(\mathbf{p} + w^{[n-1]}\mathbf{d})}{\nabla f \cdot \mathbf{d}}, \]
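The Newton iteration for the ray-surface intersection can be sketched in NumPy as below. This is a minimal illustration, not the paper's implementation: the curvature `C`, conic constant `KAPPA`, and the use of a finite-difference gradient are our assumptions (the paper's surfaces also carry polynomial terms, and in a joint-optimization setting the trace would live in an autodiff framework so gradients reach the surface coefficients).

```python
import numpy as np

# Illustrative surface parameters (conic base only; the freeform
# polynomial terms A_i * g_i(x, y) are omitted for brevity).
C = 0.01      # curvature c
KAPPA = -1.0  # conic constant kappa

def h(x, y):
    """Surface sag h(x, y)."""
    r2 = x * x + y * y
    return C * r2 / (1.0 + np.sqrt(1.0 - (1.0 + KAPPA) * C * C * r2))

def f(p):
    """Implicit surface function f(x, y, z) = h(x, y) - z."""
    x, y, z = p
    return h(x, y) - z

def grad_f(p, eps=1e-7):
    """Gradient of f; finite differences here, analytic/autodiff in practice."""
    x, y, z = p
    dhdx = (h(x + eps, y) - h(x - eps, y)) / (2 * eps)
    dhdy = (h(x, y + eps) - h(x, y - eps)) / (2 * eps)
    return np.array([dhdx, dhdy, -1.0])

def intersect(p, d, n_iter=10):
    """Newton iteration w[n] = w[n-1] - f(p + w d) / (grad_f . d)."""
    w = 0.0
    for _ in range(n_iter):
        q = p + w * d
        w = w - f(q) / grad_f(q).dot(d)
    return p + w * d
```

For a ray `p = (0.5, 0, -10)` propagating along `d = (0, 0, 1)`, `intersect` returns the point on the surface where `f` vanishes, i.e. `z = h(0.5, 0)`.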
\[ e_{m,n}^{\mu} = \frac{1}{2\pi\sigma}\exp\left(-\frac{r_{m,n}^2}{2\sigma^2}\right), \]
\[ \mathrm{PSF} = \left[\sum_{\mu=1}^{N} e_{m,n}^{\mu}\right]_{K\times K} \quad (1 \le m, n \le K). \]
\[ \mathrm{IMG}_{p,q} = \mathrm{OBJ}_{p,q} \ast \mathrm{PSF}_{p,q}. \]
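A rough NumPy sketch of this PSF construction and blur: each traced spot point contributes a Gaussian kernel, the contributions are accumulated on a K x K grid, and the object patch is convolved with the result. The kernel width `sigma`, the pixel pitch, and the normalization of the PSF to unit sum are our assumptions for the demo, not values from the paper.

```python
import numpy as np

def psf_from_spots(points, K=11, pixel=1.0, sigma=0.5):
    """Accumulate a Gaussian e^mu at each spot point (x, y), giving a KxK PSF.

    The PSF is normalized to unit sum so the blur preserves mean intensity
    (an assumption for this demo).
    """
    yy, xx = np.meshgrid(np.arange(K), np.arange(K), indexing="ij")
    cy = (yy - (K - 1) / 2) * pixel  # pixel-center y coordinates
    cx = (xx - (K - 1) / 2) * pixel  # pixel-center x coordinates
    psf = np.zeros((K, K))
    for px, py in points:
        r2 = (cx - px) ** 2 + (cy - py) ** 2
        psf += np.exp(-r2 / (2 * sigma ** 2)) / (2 * np.pi * sigma)
    return psf / psf.sum()

def blur(obj, psf):
    """Direct 2-D convolution IMG = OBJ * PSF for one patch ('same' size)."""
    K = psf.shape[0]
    pad = K // 2
    padded = np.pad(obj, pad)
    kernel = psf[::-1, ::-1]  # flip for true convolution
    out = np.empty_like(obj, dtype=float)
    for i in range(obj.shape[0]):
        for j in range(obj.shape[1]):
            out[i, j] = (padded[i:i + K, j:j + K] * kernel).sum()
    return out
```

In the paper's framework the PSF varies across field patches (p, q), so this would be repeated per patch; an FFT-based convolution would replace the double loop in any real implementation.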
\[ \chi_j = \max\left(\left(2 \times \left\| \mathbf{p}_{\mu}^{\mathrm{local}} - \mathbf{p}_{1}^{\mathrm{local}} \right\|_2\right)_{1\le\mu\le N}\right), \]
\[ P = \max\left(\left(\chi_j\right)_{1\le j\le M}\right), \]
\[ L_{\mathrm{spot}} = \begin{cases} P - K\Delta x, & \text{if } P > K\Delta x \\ 0, & \text{if } P \le K\Delta x \end{cases}, \]
\[ L_{\mathrm{img}} = 1 - \frac{\sum_{t=1}^{T}\mathrm{SSIM}\left(\mathrm{OBJ}_t, \mathrm{IMG}_{\mathrm{rec},t}\right)}{T}, \]
\[ f_x = \frac{h_x}{\tan\theta}, \quad f_y = \frac{h_y}{\tan\theta}. \]
\[ L_{\mathrm{EFL}} = \left|f_x - f_{x,\mathrm{target}}\right| + \left|f_y - f_{y,\mathrm{target}}\right|. \]
\[ L_{\mathrm{dis}} = \begin{cases} \delta_{\mathrm{target}} - \delta, & \text{if } \delta < \delta_{\mathrm{target}} \\ 0, & \text{if } \delta \ge \delta_{\mathrm{target}} \end{cases}. \]
\[ L_{\mathrm{obs}} = \sum_{g=1}^{D} L_{\mathrm{dis},g}. \]
\[ \gamma = \frac{h_{x,\mathrm{ideal}}(\alpha) - h_x(\alpha)}{h_{x,\mathrm{ideal}}} \times 100\%, \]
\[ L_{\mathrm{mean,dst}} = \mathrm{mean}\left(\left(|\gamma_k|\right)_{1\le k\le W}\right), \quad L_{\mathrm{max,dst}} = \max\left(\left(|\gamma_k|\right)_{1\le k\le W}\right), \]
\[ L_{\mathrm{dst}} = L_{\mathrm{mean,dst}} + w_{\mathrm{max,dst}} L_{\mathrm{max,dst}}, \]
\[ L_{\mathrm{center}} = \sum_{b=1}^{B} \left\| \mathbf{p}_1^{\mathrm{local}}(b) \right\|_2. \]
\[ L_{\mathrm{vol}} = \max\left(0, V_x - V_{x,\max}\right) + \max\left(0, V_y - V_{y,\max}\right) + \max\left(0, V_z - V_{z,\max}\right). \]
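The constraint losses above (spot size, obstruction clearance, volume) share a one-sided "hinge" form: zero while the constraint is satisfied, linear once it is violated. A minimal sketch, with a helper name of our own choosing:

```python
def hinge(value, limit, penalize_excess=True):
    """One-sided penalty: zero inside the constraint, linear outside it.

    penalize_excess=True  -> max(0, value - limit), e.g. volume over a bound.
    penalize_excess=False -> max(0, limit - value), e.g. clearance below target.
    """
    if penalize_excess:
        return max(0.0, value - limit)
    return max(0.0, limit - value)
```

For example, a spot diameter of 5 pixels against a 3-pixel budget yields a penalty of 2, while a clearance of 2 mm against a 3 mm target (penalize_excess=False) yields 1.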
\[ L_1 = w_{\mathrm{img}} L_{\mathrm{img}}, \]
\[ L_2 = w_{\mathrm{spot}} L_{\mathrm{spot}} + w_{\mathrm{EFL}} L_{\mathrm{EFL}} + w_{\mathrm{obs}} L_{\mathrm{obs}} + w_{\mathrm{dst}} L_{\mathrm{dst}} + w_{\mathrm{center}} L_{\mathrm{center}} + w_{\mathrm{vol}} L_{\mathrm{vol}}. \]
\[ L_{\mathrm{total}} = L_1 + L_2. \]
\[ \operatorname*{arg\,min}_{\xi,\zeta} \left( L_{\mathrm{total}}(\xi,\zeta) \right). \]
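Assembling the total loss is a plain weighted sum over the individual terms, minimized jointly over the optical parameters and the network weights. The sketch below only shows the bookkeeping; the weight and loss values are placeholders, not the paper's settings.

```python
def total_loss(losses, weights):
    """L_total = sum_k w_k * L_k over all named loss terms."""
    return sum(weights[k] * losses[k] for k in weights)

# Placeholder values for illustration only.
losses = {"img": 0.12, "spot": 0.0, "EFL": 0.03, "obs": 0.0,
          "dst": 0.01, "center": 0.0, "vol": 0.0}
weights = {"img": 1.0, "spot": 10.0, "EFL": 1.0, "obs": 5.0,
           "dst": 1.0, "center": 0.1, "vol": 2.0}
```

In the joint optimization, `total_loss` would be the scalar backpropagated through both the differentiable ray trace (surface coefficients) and the recovery network (its weights).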
\[ h(x,y) = \frac{c(x^2+y^2)}{1+\sqrt{1-(1+\kappa)c^2(x^2+y^2)}} + A_1 x^2 + A_2 y^2 + A_3 x^2 y + A_4 y^3 + A_5 x^4 + A_6 x^2 y^2 + A_7 y^4. \]