
Inter-module gap filling method for photon counting detectors based on dual acquisition

Open Access

Abstract

The use of photon counting detectors in X-ray imaging can substantially improve the signal-to-noise ratio and image resolution. However, the stitching of photon counting detector modules leaves gaps that cause large localized information loss in the acquired projection image, severely hampering observation of the affected regions. In this paper, we propose a method for filling the inter-module gap based on dual acquisition, referred to as the GFDA algorithm, which consists of three main steps: (i) acquire the main projection by a short-exposure scan, then move the carrier table vertically and scan again to acquire the reference projection; (ii) locate the projected region of interest using an alignment method; (iii) recover the missing information by image stitching and image fusion. We analyzed the gray values of the region of interest in the Siemens star projection and in reconstructed conch slice data, and showed that the proposed method recovers the missing information more smoothly and completely. The GFDA algorithm achieves better image restoration without additional scanning time and better preserves image detail. In addition, the GFDA algorithm is scalable, as demonstrated by the task of filling the stitching gaps of several types of photon counting detectors.
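The three steps above can be sketched in a few lines of NumPy. This is a minimal illustration only, not the authors' code: the function name `fill_gap_dual_acquisition`, the assumption that the gap is a horizontal band of known rows, and the linear weight ramp at the seams are all assumptions; the blend follows the weighted-fusion form F = wA + (1 − w)B used in the paper's Eq. (2).

```python
import numpy as np

def fill_gap_dual_acquisition(main, ref, dy, gap_rows, blend=2):
    """Hypothetical GFDA-style gap filling: `main` is the primary
    projection with a horizontal inter-module gap, `ref` is a second
    scan taken after moving the carrier table vertically by `dy`
    pixels, `gap_rows` = (start, stop) is the missing row band, and
    `blend` rows on each side of the seam are feathered with a linear
    weight w, following F = w*A + (1 - w)*B."""
    filled = main.astype(float).copy()
    # Align the reference scan to the main scan (a pure vertical shift).
    aligned = np.roll(ref.astype(float), dy, axis=0)
    start, stop = gap_rows
    # Replace the missing band outright with the aligned reference.
    filled[start:stop] = aligned[start:stop]
    # Feather the seams: w ramps from ~0 (reference) next to the gap
    # toward 1 (main) as we move away from it.
    for k in range(1, blend + 1):
        w = k / (blend + 1)
        filled[start - k] = w * main[start - k] + (1 - w) * aligned[start - k]
        filled[stop + k - 1] = w * main[stop + k - 1] + (1 - w) * aligned[stop + k - 1]
    return filled
```

In practice the shift between the two scans would come from the alignment step rather than being known exactly, and the weight map near real module seams is two-dimensional, but the structure — substitute inside the gap, blend at its borders — is the same.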

© 2024 Optica Publishing Group


Data availability

The data and the code used for the paper are available for researchers upon request from the corresponding author.



Figures (10)

Fig. 1. Geometric model of the PCD-CT system.
Fig. 2. Parameter calibration flow of the proposed method.
Fig. 3. Schematic diagram of a numerical simulation strategy.
Fig. 4. Alignment results obtained for different types of samples.
Fig. 5. Relationship between the area in which the pixel point is located and the weighting factor $w$.
Fig. 6. Siemens star projection results obtained by different weighted methods.
Fig. 7. Local results of Fig. 6 and the gray values of the locations marked with the red line.
Fig. 8. Slices of conch 3D data obtained by different image inpainting methods.
Fig. 9. Partial results of Fig. 8 and the gray values of the locations marked with the red line.
Fig. 10. Stitching gap filling results for different types of photon counting detectors.

Tables (3)

Table 1. Registration Results Obtained with the Numerical Simulation Strategy
Table 2. Experimental Objects and Projected Imaging Parameters
Table 3. Detector Specifications

Equations (2)

$$\begin{bmatrix} x_{\mathrm{main}} \\ y_{\mathrm{main}} \end{bmatrix} = \begin{bmatrix} x_{\mathrm{ref}} \\ y_{\mathrm{ref}} \end{bmatrix} + \begin{bmatrix} d_x \\ d_y \end{bmatrix},$$

$$F(m, n) = w\,A(m, n) + (1 - w)\,B(m, n),$$
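The first equation models the alignment between the two acquisitions as a pure translation $(d_x, d_y)$. One standard way to estimate an integer-pixel translation is FFT phase correlation; the sketch below is an illustration of that general technique under the pure-shift assumption, not the paper's specific alignment method, and the helper name `estimate_shift` is hypothetical.

```python
import numpy as np

def estimate_shift(main, ref):
    """Estimate the integer (dy, dx) translation that maps `ref` onto
    `main`, i.e. np.roll(ref, (dy, dx), axis=(0, 1)) ~= main, using
    FFT phase correlation."""
    F1 = np.fft.fft2(main)
    F2 = np.fft.fft2(ref)
    # Normalized cross-power spectrum; its inverse FFT is a delta
    # peak at the translation for a pure circular shift.
    cps = F1 * np.conj(F2)
    cps /= np.abs(cps) + 1e-12
    corr = np.fft.ifft2(cps).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size to negative values.
    if dy > main.shape[0] // 2:
        dy -= main.shape[0]
    if dx > main.shape[1] // 2:
        dx -= main.shape[1]
    return int(dy), int(dx)
```

Real projections differ at the gap itself and at sub-pixel shifts, so a practical estimator would mask the missing band and refine the peak, but the cross-power-spectrum peak gives the translation of the pure-shift model directly.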