
Single photonic integrated circuit imaging system with a 2D lens array arrangement

Open Access

Abstract

The segmented planar imager is an advanced optical interferometric telescope built around a photonic integrated circuit (PIC). It provides a significant reduction in size, weight, and power consumption compared to traditional optical interferometry. In this article, we propose the combination of a single PIC with a two-dimensional (2D) lens array to achieve single-PIC imaging. Unlike previous designs, which require a large number of PICs arranged in different directions for imaging, a single-PIC imaging system requires only one PIC for 2D frequency domain sampling and imaging. In addition, the single-PIC imaging system can form a larger equivalent aperture through modularization. Since PICs can be mass-produced, the modularization ability of the single-PIC imaging system greatly shortens the production and development cycle of large-aperture telescopes.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Space-based optical remote sensors have the advantages of not being restricted by geographic location and being less affected by the atmosphere [1], which makes them important in astronomical exploration, ground exploration and other fields. According to classical optical theory, improving the angular resolution of an optical system requires increasing its effective aperture.

However, as the optical system aperture increases, its weight, volume, power consumption and cost grow exponentially; moreover, complicated optical path designs and calibrations pose great challenges to traditional optical manufacturing and detection capabilities. Segmented planar imaging technology, based on the principle of optical interferometric imaging, provides a new approach to space-based high-resolution imaging. It uses a photonic integrated circuit (PIC) to achieve a compact arrangement of interferometric arrays instead of traditional film, CCD or CMOS detection, breaking through size, weight, and power (SWaP) limitations while achieving higher image resolution. Because production and alignment take the form of a photonic integrated circuit, the development cycle can be shortened to several months or even several weeks. Fig. 1 shows the Hubble Telescope (approximately $183\,m^3$ in volume, $11{,}110\,kg$ in weight, and $2{,}800\,W$ in power consumption) and a segmented planar imaging concept camera (approximately $5\,m^3$ in volume, $500\,kg$ in weight, and $500\,W$ in power consumption) at the same aperture and resolution.

Fig. 1. Hubble Telescope and SPIDER camera comparison.

This technology was first proposed by the LM Advanced Technology Center and UC Davis in 2012 and is called the Segmented Planar Imaging Detector for EO Reconnaissance (SPIDER) [2,3]. SPIDER has since progressed through three stages: theoretical verification (2010-2013), first-generation PIC (2013-2014) and second-generation PIC (2015-2020). The second-generation PIC measures $22\times 22\,mm$ with a three-layer board design and can sample 103 spatial frequencies at a time [4].

In 2018, the Shanghai Institute of Technical Physics of CAS studied SPIDER and designed a wide-field detection and narrow-field tracking target system based on it [5]. In 2018, Xidian University investigated the factors affecting u-v coverage, fringe visibility and system imaging quality, providing a basis for system design and optimization [6]. In 2020, Fudan University proposed a hexagonal lens array structure, which achieves a more compact lens arrangement and different spatial frequency sampling by changing the combination of lenses [7]. In 2018, the Xi'an Institute of Optics and Mechanics of CAS proposed a compressive-sensing-based method (CS-CPCIT) [8]. In 2020, an updated design (CS-CPCIT+) was proposed, whose photonic integrated circuit arrangement greatly simplified the structure [9].

In previous designs, a single PIC can only collect frequency domain information in one direction, so many PICs must be arranged in different directions to form an optical system. This article presents a modular, integrated single-PIC imaging system design based on a two-dimensional lens array arrangement and compressive sensing [10,11]. The single-PIC imaging system can be used independently, or multiple such systems can be combined into an equivalent large imaging system. This paper is divided into five parts. Section 2 introduces the basic principles of SPIDER, CS-CPCIT and CS-CPCIT+, which are fundamental to understanding this paper. Section 3 covers the implementation of the two-dimensional design and its modular integration. Section 4 simulates and verifies the two-dimensional design, and Section 5 presents the discussion and conclusions.

2. Principle introduction

2.1 SPIDER

The basic principle of SPIDER is as follows. A pair of lenses forms an interference baseline. The mutual intensity between two points on the detection surface is obtained through interferometry to fill the spatial frequency spectrum of the target, and the target image is then reconstructed by an inverse Fourier transform. This technology replaces traditional large optical lenses with a microlens array and replaces free-space interference with on-chip interference. At the same image resolution, SPIDER can greatly reduce the SWaP of an optical system compared to a traditional optical system.

SPIDER reconstructs the original signal $x\in R^{H\times 1}$ from the measured value $y$, which can be expressed as

$$y=EFx.$$

Here, $E\in R^{M'\times H}$ represents the binary matrix of spatial frequency sampling of the original signal by SPIDER, $F\in R^{H\times H}$ represents the discrete Fourier transform, and $y\in R^{M'\times 1}$ represents $M'$ measured values.
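This measurement model can be sketched numerically; the sizes and the random choice of sampled frequencies below are illustrative assumptions, not SPIDER's actual parameters:

```python
import numpy as np

# Illustrative sizes: a length-H scene and M' sampled spatial frequencies.
H, M_prime = 64, 16
rng = np.random.default_rng(0)

x = rng.random(H)                      # original signal x ∈ R^{H×1}
F = np.fft.fft(np.eye(H))              # discrete Fourier transform matrix F
rows = rng.choice(H, size=M_prime, replace=False)
E = np.eye(H)[rows]                    # binary frequency-selection matrix E

y = E @ F @ x                          # measured values y = E F x
# E F x simply picks M' coefficients of the DFT of x:
assert np.allclose(y, np.fft.fft(x)[rows])
```

The binary matrix $E$ does nothing more than select which rows of the full DFT are measured, which is why the number of measurements $M'$ is tied directly to the number of sampled baselines.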

2.2 CS-CPCIT design

In 2018, the authors proposed an optimized arrangement of PICs based on compressive sensing, which can greatly simplify the PIC structure [8]. For comparison, the reconstruction model of CS-CPCIT is as follows:

$$y'=G\Gamma F'x.$$

Here, $G=[G_1,G_2,\ldots,G_M ]^T\in R^{M\times W}$ represents the sensor matrix composed of $M$ different control gate states, $\Gamma \in R^{W\times H}$ represents the binary matrix of the spatial frequency sampling of the original signal, $F'\in R^{H\times H}$ represents the discrete Fourier transform, and $y'\in R^{M\times 1}$ represents $M$ measured values. Fig. 2 shows the SPIDER and CS-CPCIT structures.

Fig. 2. The SPIDER and CS-CPCIT structures.

2.3 CS-CPCIT+ design

Although CS-CPCIT simplifies the PIC structure and increases the number of baselines, its spatial frequency sampling range is still smaller than that of an optimal lens arrangement. Considering that any lens of one CS-CPCIT group can form a baseline with every lens of the other group, we optimized the lens arrangement of CS-CPCIT. Additionally, the measurement matrices of the two sets of lenses can be unequal. The conceptual diagram of the upgraded CS-CPCIT structure (CS-CPCIT+) is shown in Fig. 3 [9].

Fig. 3. CS-CPCIT+ design.

Suppose there are $2N$ lenses, where $N$ lenses are defined as $S=[S_1,S_2,\ldots,S_N]^T\in R^{N\times 1}$, and the other $N$ lenses are defined as $R=[R_1,R_2,\ldots,R_N ]^T\in R^{N\times 1}$. The incident light entering lenses $S$ and $R$ can then be expressed as:

$$U_{S_p}=A_p e^{{-}j(\omega t+\varphi_{A_p})},p=1,\ldots,N,$$
$$U_{R_q}=B_q e^{{-}j(\omega t+\varphi_{B_q})},q=1,\ldots,N.$$

Here, $A_p$ and $B_q$ represent the amplitude of each incident light. Then the incident light enters the interferometer and interferes:

$$U_S=\sum_{p=1}^N U_{S_p} ,U_R=\sum_{q=1}^N U_{R_q}.$$

The final output in-phase signal ($f_I$) and quadrature phase signal ($f_Q$) can be expressed as

$$f_I =\sum_{p=1}^N \sum_{q=1}^N A_p B_q \cos(\varphi_{A_p}-\varphi_{B_q}),$$
$$f_Q =\sum_{p=1}^N \sum_{q=1}^N A_p B_q \sin(\varphi_{A_p}-\varphi_{B_q}).$$

The output signals $f_I$ and $f_Q$ are measured and calculated separately, so $A_p B_q \cos(\varphi _{A_p}-\varphi _{B_q})$ and $A_p B_q \sin(\varphi _{A_p}-\varphi _{B_q})$ do not affect each other during the measurement process. Here we take $f_Q$ as an example for the following analysis. For brevity, $U_{S_p} U_{R_q}$ denotes $A_p B_q \sin(\varphi _{A_p}-\varphi _{B_q})$. A single on/off configuration of all control gates can be regarded as one row of the measurement matrix; the full measurement matrix is obtained by switching the control gates multiple times.
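The $f_I$ and $f_Q$ double sums above are just the real and (negated) imaginary parts of the product $U_S U_R^*$, which can be checked numerically (the amplitudes and phases below are arbitrary test values):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 5
A, phiA = rng.random(N), rng.uniform(0, 2 * np.pi, N)   # amplitudes/phases of S
B, phiB = rng.random(N), rng.uniform(0, 2 * np.pi, N)   # amplitudes/phases of R

# Complex envelopes of the summed fields; the common e^{-jωt} factor
# cancels in the product below, so it is dropped here.
US = np.sum(A * np.exp(-1j * phiA))
UR = np.sum(B * np.exp(-1j * phiB))

f_I = sum(A[p] * B[q] * np.cos(phiA[p] - phiB[q]) for p in range(N) for q in range(N))
f_Q = sum(A[p] * B[q] * np.sin(phiA[p] - phiB[q]) for p in range(N) for q in range(N))

assert np.isclose(f_I, (US * UR.conj()).real)
assert np.isclose(f_Q, -(US * UR.conj()).imag)
```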

Assuming that the measurement matrices of the two sets of lenses are unequal, with measurement matrix $\Phi _{S}$ for the S group and $\Phi _{R}$ for the R group, the measurement process can be described as:

$$y^{\prime\prime} = (\Phi_S U_S)\circ(\Phi_R U_R),$$
$$y^{\prime\prime}_i=\sum(\Phi^{T}_{S_i} \Phi_{R_i})\circ(U_S U_R^{T})=G'_i \Upsilon.$$

Here, $G'=[G'_1,G'_2,\ldots,G'_M ]^T\in R^{M\times W}$ denotes the sensing matrix composed of $M$ different control gate configurations of the $S$ and $R$ lens groups. Compared to CS-CPCIT, the only difference is that the sensing matrix changes from $G$ to $G'$. $G'$ still satisfies Eq. (2) and can be calculated according to CS-CPCIT.

When the number of lenses is $2N$, the number of baselines composed of SPIDER, CS-CPCIT and CS-CPCIT+ is $N$, $2N-1$ and $N\times N$, respectively. Moreover, the output data volume of CS-CPCIT+ is much more compact than that of SPIDER, so the spatial information transmission efficiency of CS-CPCIT+ will be higher.

Most traditional compressive sensing reconstruction algorithms are based on prior knowledge, most commonly that the original signal is sparse under some transformation. There are three main methods to solve the reconstruction problem under this sparse model. $\mathbf {(1)}$ Convex relaxation, which transforms the NP-hard $\ell_0$-norm problem into a convex optimization problem minimizing the $\ell_1$-norm, such as BP [12] and GPSR [13]. $\mathbf {(2)}$ Greedy matching pursuit, which in each iteration selects the most suitable atom under a greedy strategy and adds it to the candidate set, such as MP, OMP [14] and CoSaMP [15]. $\mathbf {(3)}$ Bayesian methods, which use the prior probability distribution of the signal to transform reconstruction into a probabilistic inference problem, such as BCS [16] and BCS-LP [17]. In addition, the rapid development of deep learning has led researchers to develop compressive sensing reconstruction algorithms that are not hand-designed models, for example, SDA [18], DeepInverse [19] and ReconNet [20].
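As a minimal illustration of the greedy family, a sketch of OMP follows; the matrix sizes, sparsity level, and Gaussian sensing matrix are arbitrary assumptions for the example, not the system's actual parameters:

```python
import numpy as np

def omp(A, y, k):
    """Minimal orthogonal matching pursuit: greedily pick the column most
    correlated with the residual, then refit the support by least squares."""
    support, x_hat = [], np.zeros(A.shape[1])
    residual = y.astype(float).copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))
A /= np.linalg.norm(A, axis=0)              # unit-norm columns (dictionary atoms)
x_true = np.zeros(100)
x_true[[5, 37, 90]] = [1.5, -2.0, 0.7]      # 3-sparse test signal
x_rec = omp(A, A @ x_true, k=3)             # with generic Gaussian measurements,
                                            # this typically recovers x_true exactly
```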

3. Two-dimensional arrangement

In previous designs, the lens arrangement is one-dimensional, so a single PIC can only sample spatial frequencies along a single orientation. In the CS-CPCIT+ design, the PIC structure optimized by compressive sensing greatly increases the number of baselines. A two-dimensional lens arrangement can be realized by using an optical fiber bundle to route light from the lens array to the CS-CPCIT+ PIC structure.

3.1 Monolithic two-dimensional arrangement structure

Here we choose the T-type lens array arrangement, as shown in Fig. 4(a) and Fig. 6. The T-type arrangement is one of several two-dimensional arrangement forms; others include rings and hexagons. In our research, the T-type arrangement is the better choice, with several advantages:

  • (1) The frequency domain sampling of the T-type arrangement has no redundancy. Assuming that there are $M$ lenses in the $S$ group and $N$ lenses in the $R$ group of the T-type arrangement, $N\times M$ different spatial frequencies can be sampled without redundant sampling.
  • (2) The frequency domain sampling of the T-type arrangement is regular and complete. Assuming that there are $M$ lenses in the $S$ group and $N$ lenses in the $R$ group and the spatial frequency sampling interval is $f$, it can be seen that the T-type arrangement fully samples the spatial frequencies within the rectangular range of $(2N-1)f\times Mf$. For example, when $N=M=65$ and $f=1$, the T-type arrangement fully samples the spatial frequencies within a rectangular range of $129\times 65$, as shown in Fig. 5.
  • (3) When performing modular integration to expand the aperture, the frequency domain splicing of the T-type arrangement is easier to achieve since the frequency domain sampling of the T-type arrangement is rectangular, as shown in Section 4.
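Claims (1) and (2) can be checked numerically under an assumed T-type geometry (the $S$ group on a vertical arm with spacing $a$, the $R$ group on a centered horizontal arm with spacing $b$; this layout is our reading of Fig. 6):

```python
import itertools

# Hypothetical T-type geometry for illustration, matching the N = M = 65 example.
M, N, a, b = 65, 65, 1.0, 1.0
S = [(0.0, i * a) for i in range(M)]                       # vertical arm
R = [((j - (N - 1) / 2) * b, 0.0) for j in range(N)]       # centered horizontal arm

baselines = {(sx - rx, sy - ry)
             for (sx, sy), (rx, ry) in itertools.product(S, R)}
assert len(baselines) == M * N        # claim (1): M·N baselines, no redundancy

# Adding the Hermitian conjugate points (-u, -v) of a real scene's spectrum
# fills a complete 129 × 65 rectangular grid, as in claim (2):
full = baselines | {(-u, -v) for u, v in baselines}
assert len(full) == N * (2 * M - 1)
```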

Fig. 4. Comparison of PIC lens arrangements of 2D-CPCIT and SPIDER required for imaging.

Fig. 5. (a) Simulation of T-type lens array and (b) corresponding frequency domain sampling.

Fig. 6. 2D-CPCIT design and lens arrangement.

The 2D lens arrangement structure is shown in Fig. 6. Generally, the numbers of lenses in the $S$ and $R$ groups should be approximately equal, so that the total light intensities of the two groups are approximately equal and the maximum fringe contrast is obtained.

Assume that there are $M$ lenses in the $S$ group and $N$ lenses in the $R$ group, and that the minimum sampling intervals of groups $S$ and $R$ are $a$ and $b$, respectively. The longest baseline is the diagonal, that is,

$$B_{max}=\sqrt{[(M-1)\times a]^2+[(N-1)/2\times b]^2}.$$

The total number of baselines is

$$N_{total}=M\times N.$$

The spatial frequency sampling distribution is

$$B(i,j)=\sqrt{[(M(i)-1)\times a]^2+[(N(j)-1)/2\times b]^2 }.$$

According to Eqs. (10)–(12), the corresponding frequency domain can be completely sampled.
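The expressions above for $B_{max}$, $N_{total}$ and $B(i,j)$ are straightforward to evaluate; for example, with the parameters used later in the simulation ($M = N = 65$, unit sampling intervals):

```python
import math

M, N, a, b = 65, 65, 1.0, 1.0

B_max = math.hypot((M - 1) * a, (N - 1) / 2 * b)   # longest (diagonal) baseline
N_total = M * N                                     # total number of baselines

print(round(B_max, 2), N_total)   # → 71.55 4225
```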

Through the two-dimensional arrangement, an optical system needs only one PIC in theory, whereas multiple PICs were needed in previous designs, as shown in Fig. 4(b). The equivalent aperture can also be expanded through multiple PICs to balance the computational load against the processing capacity of a single PIC. The two-dimensional arrangement realizes integration and modularization and can be adjusted according to resolution requirements.

According to interference theory, the relationship between fringe visibility and the intensity of two coherent lights is

$$\eta=2\sqrt{I_1 I_2}/(I_1+I_2).$$

Here, $I_1$ and $I_2$ are the intensities of the two beams. The greater the difference in intensity between the two beams, the smaller $\eta$ becomes; in particular, when the two intensities are equal, the fringe visibility $\eta =1$. To avoid degrading the interference result, $\eta >0.707$ is generally required [21], which corresponds to $I_1/I_2 \approx 0.2$, i.e., a lens ratio of $1:5$. Here we require a fringe visibility $\eta >0.9$, corresponding to $I_1/I_2 \approx 0.4$ and a lens ratio of $2:5$; the interference result can be regarded as ideal in this situation.
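The visibility formula and the two intensity-ratio thresholds quoted above can be verified directly:

```python
import math

def visibility(I1, I2):
    """Fringe visibility of two coherent beams: η = 2·sqrt(I1·I2) / (I1 + I2)."""
    return 2 * math.sqrt(I1 * I2) / (I1 + I2)

assert visibility(1.0, 1.0) == 1.0      # equal intensities give η = 1
assert visibility(0.2, 1.0) > 0.707     # intensity ratio ≈ 0.2 keeps η > 0.707
assert visibility(0.4, 1.0) > 0.9       # intensity ratio ≈ 0.4 keeps η > 0.9
```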

3.2 Modularization of a two-dimensional lens arrangement

In theory, a two-dimensional lens arrangement on a single PIC can be used for low-resolution imaging. On the one hand, the number of lenses needs to be increased if higher-resolution imaging is required; on the other hand, the number of lenses cannot be increased without limit because the processing capacity of a single PIC is restricted. Therefore, modularization can be considered. In this paper, the 2D lens arrangement structure is modularized to obtain a larger equivalent aperture.

Assuming the required equivalent aperture is doubled, the number of lenses in the $S$ group and $R$ group of a single PIC must be doubled. Another method is modular partition sampling, which can realize frequency domain sampling and splicing through 4 PICs, while the processing capacity corresponding to a single PIC remains unchanged. Similarly, when the required equivalent aperture is expanded by $K$ times, the required number of PICs is $K^2$. The corresponding modules are shown in Fig. 7. Through this splicing structure, sampling in a specific frequency region or full-frequency sampling with an equivalent large aperture can be achieved.
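The scaling rule above reduces to a one-line relationship between the aperture expansion factor $K$ and the PIC count:

```python
# Modular aperture expansion: a K-fold equivalent aperture is tiled by K^2
# PICs, each responsible for one frequency-domain block (as in Fig. 7).
def pics_needed(K: int) -> int:
    return K ** 2

assert pics_needed(2) == 4    # doubled aperture → 4 PICs
assert pics_needed(3) == 9
```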

Fig. 7. 2D lens arrangement modularization.

Another two-dimensional arrangement matches the lens array to multiple PICs when the light intensity is sufficient, as shown in Fig. 8. The form of the lens array remains unchanged, but the incident light enters the fiber bundle, is split, and then enters different PICs for block processing. In this case, the input light intensity must be large enough that the light energy required for interference and detection is still met after splitting.

Fig. 8. Another two-dimensional arrangement with the lens arrangement unchanged.

3.3 Comparison

Traditional SPIDER needs $2N$ PICs to realize $N\times N$ frequency domain sampling, with $2N$ lenses arranged on each PIC; in other words, it needs $4N^2$ lenses in total. In the CS-CPCIT+ design, $2N$ PICs are also required to achieve two-dimensional $N\times N$ sampling, with $\sqrt {2N}$ lenses on each PIC, i.e., $2N\sqrt {2N}$ lenses in total. In the 2D-CPCIT design, however, only 1 PIC and $2N$ lenses are needed to achieve two-dimensional $N\times N$ sampling.
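The lens and PIC counts quoted above can be collected into a small comparison helper (the function name is ours, not from the paper):

```python
# Resources required for N×N frequency-domain sampling under each design:
def resources(N):
    return {
        "SPIDER":    {"PICs": 2 * N, "lenses": 4 * N * N},
        "CS-CPCIT+": {"PICs": 2 * N, "lenses": 2 * N * (2 * N) ** 0.5},
        "2D-CPCIT":  {"PICs": 1,     "lenses": 2 * N},
    }

r = resources(65)
print(r["SPIDER"]["lenses"], r["2D-CPCIT"]["lenses"])   # → 16900 130
```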

"Aperture expansibility" means that 2D-CPCIT can be integrated to achieve an equivalent large aperture. As shown in Fig. 7, twice the equivalent aperture can be formed through Fig. 7(a)$\sim$(d). Therefore, the equivalent aperture can be expanded. Single PIC of SPIDER and CS-CPCIT+ samples spatial frequency in a single direction. SPIDER and CS-CPCIT+ can only sample the frequency domain information in different directions through multiple PICs, but the aperture is certain and cannot form a larger equivalent aperture.

Image quality mainly depends on the completeness of spatial frequency sampling. With the same number of lenses, say $2N$, SPIDER obtains only $N$ frequency domain samples. Although both CS-CPCIT+ and 2D-CPCIT can sample $N\times N$ spatial frequencies, CS-CPCIT+ samples in a single direction; two-dimensional sampling with CS-CPCIT+ requires multiple PIC arrangements, such as a radial arrangement like SPIDER's, which inevitably produces both redundant and sparsely sampled regions of the frequency domain. Therefore, the imaging quality of 2D-CPCIT will be better than that of CS-CPCIT+, because its two-dimensional lens arrangement makes the frequency domain sampling more complete. In terms of imaging time, SPIDER images in real time, whereas CS-CPCIT+ and 2D-CPCIT are based on compressive sensing, which requires multiple samplings and therefore a longer imaging time. In effect, CS-CPCIT+ and 2D-CPCIT trade time for more frequency domain samples and a more compact photonic integrated circuit structure (Table 1).

Table 1. Comparison of SPIDER, CS-CPCIT+ and 2D-CPCIT.

4. Experiment and simulation

The simulation is performed in this section with the T-type arrangement of Fig. 6. We simulated a single PIC sampling different frequency domain regions corresponding to different lens arrays, then combined these regions to form an equivalent aperture of twice the size. The $S$ group and the $R$ group each contain 65 lenses, and the spatial frequency sampling interval is $f$. The spatial frequency sampling range of the $S$ group is then $-64f\sim 64f$, and that of the $R$ group is $-32f \sim 32f$; the sample size is $129f\times 65f$. The simulation targets are a satellite image and a USAF1951 resolution chart, both of size $129\times 65$.

The CS reconstruction algorithm applied in the simulation is the widely used BP algorithm, implemented with the L1-magic code package [22]. For the CS algorithm, the input is $s = 4,225$ and the sparsity parameter is $Sp = s/4$. Since the control gates have only two states, on and off, the Bernoulli matrix is selected as the measurement matrix, which satisfies the restricted isometry property (RIP) condition [23]. The simulation results are evaluated with both the structural similarity index (SSIM) and the peak signal-to-noise ratio (PSNR) [24].

Fig. 7(a) corresponds to the low-frequency sampling of Fig. 9(d)$\sim$Fig. 9(f): 33 lenses are located at low-frequency positions in each of the S and R directions. Fig. 7(b) corresponds to high-frequency sampling in the S direction and low-frequency sampling in the R direction, shown in Fig. 10(a)$\sim$Fig. 10(c): 32 lenses are located at high-frequency positions in the S direction and 33 lenses at low-frequency positions in the R direction. Fig. 7(c) corresponds to low-frequency sampling in the S direction and high-frequency sampling in the R direction, shown in Fig. 10(d)$\sim$Fig. 10(f), and Fig. 7(d) to high-frequency sampling in both directions, shown in Fig. 11(a)$\sim$Fig. 11(c). Through the combination of Fig. 7(a)$\sim$(d), the full frequency domain sampling of Fig. 11(d) is obtained, and the equivalent aperture is doubled.
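The qualitative effect of sampling different frequency regions can be sketched with a zero-filled inverse FFT of masked spectra; this is a deliberately simplified stand-in for the BP reconstruction, and the test scene below is a synthetic assumption rather than the paper's satellite image:

```python
import numpy as np

def psnr(ref, img):
    mse = np.mean((ref - img) ** 2)
    return 10 * np.log10(ref.max() ** 2 / mse)

# Hypothetical 65 × 129 test scene: a smooth ramp plus mild random texture.
rng = np.random.default_rng(0)
target = np.tile(np.arange(129.0) / 129, (65, 1)) + 0.1 * rng.random((65, 129))

spec = np.fft.fftshift(np.fft.fft2(target))
low = np.zeros(spec.shape, dtype=bool)
low[16:49, 32:97] = True                       # central low-frequency block

def recon(mask):
    # Zero-filled inverse FFT of the masked spectrum.
    return np.fft.ifft2(np.fft.ifftshift(spec * mask)).real

p_low = psnr(target, recon(low))
p_high = psnr(target, recon(~low))
p_full = psnr(target, recon(np.ones_like(low)))
assert p_low > p_high     # the scene's energy sits mostly at low frequencies
assert p_full > p_low     # fuller frequency coverage improves reconstruction
```

This mirrors the trend of the simulations: low-frequency tiles alone already image the scene coarsely, and combining all tiles (the doubled equivalent aperture) raises the PSNR further.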

Fig. 9. Simulation spectrograms and images; all spectrograms correspond to the satellite image. (a) Original spectrogram. (b) Original satellite image. (c) Original USAF1951 resolution chart. (d) Frequency sampling (R: low frequency, S: low frequency) and (e) reconstructed satellite ($SSIM=0.56$, $PSNR=22.05$) and (f) USAF1951 ($SSIM=0.63$, $PSNR=16.65$).

Fig. 10. Simulation spectrograms and images; all spectrograms correspond to the satellite image. (a) Frequency sampling (R: low frequency, S: high frequency) and (b) reconstructed satellite ($SSIM=0.23$, $PSNR=14.16$) and (c) USAF1951 ($SSIM=0.22$, $PSNR=10.42$). (d) Frequency sampling (R: high frequency, S: low frequency) and (e) reconstructed satellite ($SSIM=0.25$, $PSNR=14.24$) and (f) USAF1951 ($SSIM=0.13$, $PSNR=9.94$).

Fig. 11. Simulation spectrograms and images; all spectrograms correspond to the satellite image. (a) Frequency sampling (R: high frequency, S: high frequency) and (b) reconstructed satellite ($SSIM=0.08$, $PSNR=13.97$) and (c) USAF1951 ($SSIM=0.06$, $PSNR=9.78$). (d) Frequency sampling (R: full frequency, S: full frequency) and (e) reconstructed satellite ($SSIM=0.79$, $PSNR=27.99$) and (f) USAF1951 ($SSIM=0.83$, $PSNR=24.81$).

As seen in Fig. 9, imaging can be achieved using a single PIC. Comparing Fig. 9 with Fig. 10 and Fig. 11 shows that the information of the target is mainly concentrated in the low-frequency part; the high-frequency part contains detail information, and it is difficult to obtain high-quality imaging from the high-frequency part alone. Another important function of the two-dimensional arrangement is modular assembly. As shown in Fig. 9$\sim$Fig. 11, a doubled equivalent aperture can be achieved with 4 PICs: the four modules sample different frequency domain positions, which is equivalent to a large aperture. As seen from Fig. 11(e) and Fig. 11(f), the reconstructed image quality is greatly improved: the PSNR of the satellite image rises from 22.05 dB to 27.99 dB, and that of the USAF1951 chart from 16.65 dB to 24.81 dB. Because PICs can be mass-produced, an equivalent large-aperture camera only requires arranging the lenses, which greatly simplifies the development process and technology of large-aperture cameras.

5. Discussion and conclusion

SPIDER technology faces problems such as low spectral sampling density and difficulty in realizing high-density photonic integrated circuits. The 2D-CPCIT design realizes the leap from one-dimensional to two-dimensional sampling for a single-chip PIC and achieves imaging with a single chip: with $2N$ lenses, $N\times N$ spectrum coverage is achieved. It can be modularly spliced to achieve a larger equivalent aperture. Because PICs can be mass-produced, the design greatly shortens the production and development cycle of large-aperture cameras. Modular splicing requires only different arrangements of the lens arrays, which greatly reduces the difficulty of production and debugging.

Funding

Youth Innovation Promotion Association (No. 1188000111).

Acknowledgments

This work is supported by the Youth Innovation Promotion Association (No. 1188000111).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. C. Lillie, “Large deployable telescopes for future space observatories,” in UV/Optical/IR Space Telescopes: Innovative Technologies and Concepts II, vol. 5899 (International Society for Optics and Photonics, 2005), p. 58990D.

2. A. Duncan, R. Kendrick, S. Thurman, D. Wuchenich, R. P. Scott, S. Yoo, T. Su, R. Yu, C. Ogden, and R. Proiett, “Spider: Next generation chip scale imaging sensor,” in Advanced Maui Optical and Space Surveillance Technologies Conference, (2015).

3. A. Duncan, R. Kendrick, C. Ogden, D. Wuchenich, S. Thurman, T. Su, W. Lai, J. Chun, S. Li, G. Liu, and S. Yoo, “Spider: next generation chip scale imaging sensor update,” in Advanced Maui Optical and Space Surveillance Technologies Conference, (2016).

4. T. Su, R. P. Scott, C. Ogden, S. T. Thurman, R. L. Kendrick, A. Duncan, R. Yu, and S. Yoo, “Experimental demonstration of interferometric imaging using photonic integrated circuits,” Opt. Express 25(11), 12653–12665 (2017). [CrossRef]  

5. Q. Yu, D. Wu, F. Chen, and S. Sun, “Design of a wide-field target detection and tracking system using the segmented planar imaging detector for electro-optical reconnaissance,” Chin. Opt. Lett. 16(7), 071101 (2018). [CrossRef]  

6. W. P. Gao, X. R. Wang, L. Ma, Y. Yuan, and D. F. Guo, “Quantitative analysis of segmented planar imaging quality based on hierarchical multistage sampling lens array,” Opt. Express 27(6), 7955–7967 (2019). [CrossRef]  

7. C. Ding, X. Zhang, X. Liu, H. Meng, and M. Xu, “Structure design and image reconstruction of hexagonal-array photonics integrated interference imaging system,” IEEE Access 8, 139396–139403 (2020). [CrossRef]  

8. G. Liu, D.-S. Wen, and Z.-X. Song, “System design of an optical interferometer based on compressive sensing,” Mon. Not. R. Astron. Soc. 478(2), 2065–2073 (2018). [CrossRef]  

9. G. Liu, D. Wen, Z. Song, and T. Jiang, “System design of an optical interferometer based on compressive sensing: an update,” Opt. Express 28(13), 19349–19361 (2020). [CrossRef]  

10. E. J. Candès and M. B. Wakin, “An introduction to compressive sampling,” IEEE Signal Process. Mag. 25(2), 21–30 (2008). [CrossRef]  

11. J. McEwen and Y. Wiaux, “Compressed sensing for wide-field radio interferometric imaging,” Mon. Not. R. Astron. Soc. 413(2), 1318–1332 (2011). [CrossRef]  

12. S. S. Chen, D. L. Donoho, and M. A. Saunders, “Atomic decomposition by basis pursuit,” SIAM Rev. 43(1), 129–159 (2001). [CrossRef]  

13. M. A. Figueiredo, R. D. Nowak, and S. J. Wright, “Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems,” IEEE J. Sel. Top. Signal Process. 1(4), 586–597 (2007). [CrossRef]  

14. J. A. Tropp and A. C. Gilbert, “Signal recovery from random measurements via orthogonal matching pursuit,” IEEE Trans. Inf. Theory 53(12), 4655–4666 (2007). [CrossRef]  

15. M. A. Davenport, D. Needell, and M. B. Wakin, “Signal space cosamp for sparse recovery with redundant dictionaries,” IEEE Trans. Inf. Theory 59(10), 6820–6829 (2013). [CrossRef]  

16. S. Ji, Y. Xue, and L. Carin, “Bayesian compressive sensing,” IEEE Trans. Signal Process. 56(6), 2346–2356 (2008). [CrossRef]  

17. S. D. Babacan, R. Molina, and A. K. Katsaggelos, “Bayesian compressive sensing using laplace priors,” IEEE Trans. on Image Process. 19(1), 53–63 (2010). [CrossRef]  

18. A. Mousavi, A. B. Patel, and R. G. Baraniuk, “A deep learning approach to structured signal recovery,” in 2015 53rd annual allerton conference on communication, control, and computing (Allerton), (IEEE, 2015), pp. 1336–1343.

19. A. Mousavi and R. G. Baraniuk, “Learning to invert: Signal recovery via deep convolutional networks,” in 2017 IEEE international conference on acoustics, speech and signal processing (ICASSP), (IEEE, 2017), pp. 2272–2276.

20. K. Kulkarni, S. Lohit, P. Turaga, R. Kerviche, and A. Ashok, “Reconnet: Non-iterative reconstruction of images from compressively sensed measurements,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2016), pp. 449–458.

21. H. Chong, “Effects of light intensity on visibility of interference fringe,” Journal of Shantou University(Natural Science Edition) (2001).

22. E. Candes and J. Romberg, “l1-magic: Recovery of sparse signals via convex programming,” URL: www.acm.caltech.edu/l1magic/downloads/l1magic.pdf, 4, 14 (2005).

23. E. J. Candès, J. Romberg, and T. Tao, “Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inf. Theory 52(2), 489–509 (2006). [CrossRef]  

24. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004). [CrossRef]  
