
Skeletonization and 3D rendering with real time terahertz tomography

Open Access

Abstract

Terahertz technology (spanning 0.1 to 10 THz) is now a well-established tool for contactless sensing and non-destructive testing (NDT). Among the advanced approaches, THz computed tomography (THz CT) is an emerging technique for 3D reconstruction and has been extensively investigated over the last decade. This work focuses on these capabilities for 3D volumetric reconstruction of complex objects, using a real-time THz imaging system operating at 2.5 THz. We further demonstrate that the resulting data are compatible with automated processing for (i) an ad-hoc segmentation, extracting the sample from the background and surrounding reconstruction noise, (ii) a component labelling, and (iii) a skeletonization, providing crucial additional metadata about the sample morphology.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Terahertz computed tomography (THz CT) is an emerging imaging technique providing 3D visualization of an object from sets of projections acquired at different viewing angles. The low absorption and large penetration depth of transmitted THz waves provide adequate contrast for volume reconstruction, enabling inspection of the internal structure of THz-transparent objects [1–5]. Unlike other tomographic modalities such as X-ray radiation, and in addition to its harmless nature, terahertz inspection is well suited to a large variety of dielectric materials (plastics, wood, paper, ceramics, …). On the algorithmic side, to access the physical properties of the object, in-depth data processing procedures have been explored and pushed forward to minimize artefacts, either numerical or induced by physical effects such as Gaussian beam propagation and the resulting limited imaging depth of field [1], [6,7]. Nevertheless, on a practical note, THz CT remains in most cases an extremely time-consuming process. Indeed, to reach an exhaustive and relevant graphical depiction of the sample under inspection, classical single-point sensing methods require several hours of measurement for a single 3D tomographic reconstruction. This tedious process remains far from the frame rates targeted for quality control or industrial non-destructive testing applications. To face this limitation, compact and uncooled terahertz matrix sensors based on several technological advances have been developed over the last years [8–12] and integrated for real-time imaging applications. In this letter, we describe real-time THz tomography from an algorithmic point of view, with the depiction of advanced data processing implementations for 3D rendering and in-depth handling. The hardware implementation is also at the center of this work, through the description of the real-time THz imaging system applied to the inspection of a complex sample.

2. Method

2.1 Experimental setup

As the central piece for full-field imaging, a latest-generation microbolometer camera has been integrated. Namely, the 240 × 320 pixel I2S TZCam was mounted with an f/0.8 × 0.22, 40 mm focal length objective with an antireflection coating, developed by Lytid, to record the sample projection images. In a straightforward transmission configuration, such a recording scheme could simply be completed by an illumination source and possible beam-shaping elements. In this case, Lytid's multichip Teracascade 2000 QCL (Quantum Cascade Laser) source has been used to provide a 5 mW illumination beam at 2.5 THz. In the absence of additional beam-handling elements, such a simplified illumination-sensor configuration would lead to strong interference patterns induced by the high coherence of the illumination source. Such highly contrasted interference patterns would prevent any imaging and tomographic reconstruction. To overcome this impairing limitation, one solution consists of rapidly modifying the coherent optical path, typically using wobbling mirrors, to average those coherence artifacts over the integration time of the sensor. Another suited solution, implemented in this work and developed in previous work on real-time imaging [13], lies in the fast steering of the position of a small illumination beam, using galvanometric mirrors, to produce homogeneous illumination of the sample over an imaging frame. This solution has already been investigated for sensor array characterization [14] and has shown improved beam homogenization. The full investigation of this solution as a flexible imaging system in the terahertz range is given in [15], which details the improved versatility and adaptability especially suited for real-time imaging applications. The fully integrated system used here is depicted in Fig. 1 and displays the main elements: (4) the imaging unit camera, (3) the galvanometric beam homogenizer, (2) the auto-alignment unit for multi-spectral imaging, along with (1) the Teracascade 2000 QCL source.


Fig. 1. Experimental imaging setup with (1) a Teracascade QCL source, (2) an auto-alignment unit for multi-spectral imaging, (3) the galvanometric homogenizer and (4) the imaging unit with the 240 × 320 I2S TZCAM mounted with the aspherical f/0.8 × 0.22 coated silicon lens, Teralens.


Using such an implementation, large and adjustable object-field imaging can easily be achieved, as demonstrated in Fig. 2(a), where the sample is recorded through a 60 mm × 60 mm (36 cm²) illumination area at 25 FPS (Frames Per Second). Putting to good use the illumination-area tuning capability of this approach, such a wide and optimized illumination still allows for a high SNR (Signal-to-Noise Ratio) of up to 50 dB, thanks to the high illumination power and the low NEP (Noise Equivalent Power) of the camera. In such a transmission configuration, however, further corrective image processing allows for improved visualization capabilities. In particular, a normalization through the recording of a stable background image, in the absence of any sample, can be performed and gives access to a relative, denoised transmission image of the sample, perfectly suitable as an input dataset for tomographic reconstructions. For such tomography experiments, the sample is placed on a rotation stage with a continuous rotation movement to take advantage of the full-field acquisition speed.
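As an illustration of this normalization step, the following minimal Python sketch converts a raw camera frame into a relative transmission image using a background frame recorded without the sample. It is not the authors' implementation; the epsilon guard and the clipping to [0, 1] are illustrative choices.

```python
# Minimal background-normalization sketch (assumed workflow, not the authors' code):
# divide each raw frame by a stable background frame recorded without the sample
# to obtain a relative transmission image suitable as tomographic input.
import numpy as np

def normalize(frame, background, eps=1e-6):
    """frame, background: 2D arrays from the 240 x 320 camera."""
    transmission = frame / np.maximum(background, eps)  # avoid division by zero
    return np.clip(transmission, 0.0, 1.0)              # relative transmission in [0, 1]
```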


Fig. 2. (a) Selection of normalized 2D projections, recorded in real time, (b) corresponding sinogram for a medial-height recording plane, (c) photograph of the sample, a triple tree leaf (pixel size: 200 µm).


3. Results

An image that would take several hours to record using a standard raster-scan approach is instantaneously available through full-field imaging. More specifically, considering the 25 FPS frame rate and the 240 × 320 array size, a recording rate of 1.92 × 10⁶ pixel·s⁻¹ is achieved, to be compared with the few sample points per second recorded by raster-scan approaches. Through this real-time recording improvement, a full sinogram, such as the one displayed in Fig. 2, is obtained in under one minute, to be placed in perspective with the several days required for raster-scan tomographic inspections. A rotation speed of 25°·s⁻¹ allowed a full 360° sinogram to be recorded in under 15 seconds, with a projection every degree, as a substantial input dataset for the tomographic reconstruction.
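A sketch of how such a sinogram can be assembled from the real-time frame stream is given below: with a continuous rotation of 25°·s⁻¹ and 25 frames per second, consecutive frames are roughly one degree apart, so one image row at the chosen height is taken from each normalized frame. The function and parameter names are illustrative, not taken from the authors' software.

```python
# Hedged sketch: build the sinogram of one recording plane (cf. Fig. 2(b))
# from a stack of normalized transmission frames acquired during continuous rotation.
import numpy as np

def build_sinogram(frames, row, fps=25.0, deg_per_s=25.0, total_deg=360.0):
    """frames: (n_frames, height, width) normalized transmission images."""
    deg_per_frame = deg_per_s / fps                   # ~1 degree per frame here
    n_proj = int(round(total_deg / deg_per_frame))    # ~360 projections
    n_used = min(n_proj, len(frames))
    # one detector line per projection angle -> (width, n_projections) sinogram
    sino = np.stack([frames[i][row, :] for i in range(n_used)], axis=1)
    return sino
```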

An example of this fast-recording process is given in Visualization 1 accompanying this article, where the triple leaf sample displayed in Fig. 2(c) is investigated at 2.5 THz.

From there, a regularized 3D implementation of the THz Ordered Subsets Convex (THz-OSC) algorithm is used. THz-OSC is an iterative process involving several levels of iterations [16,17], specifically designed for datasets with a low number of projections. It is a variation of the Ordered Subsets Convex (OSC) algorithm used for image reconstruction in various fields, including medical imaging and astronomy, adapted to the specific characteristics of THz imaging, such as the low signal-to-noise ratio and the limited number of projections that can be obtained. The set of projections is divided into ordered subsets, and the volume estimate is updated from each subset in turn within a convex optimization framework. This approach allows for faster reconstruction and improved image quality compared with other reconstruction methods, with potential applications in biomedical imaging, security screening and materials science.

Each sub-iteration updates the volume under reconstruction from a subset of projections, until convergence is reached by minimizing an error function. The signal level through the sample relies solely on its absorption, and is thus mainly linked to the material extinction coefficient following the Beer-Lambert law. Different sample geometries could nevertheless require additional developments on this topic to avoid unrealistic artifacts.
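To make the ordered-subsets principle concrete, the following slice-wise Python sketch performs a simple additive (SIRT-like) update built on scikit-image's radon/iradon operators. It is not the regularized THz-OSC implementation of [16,17]; the subset count, iteration number and relaxation factor are illustrative choices.

```python
# Hedged ordered-subsets reconstruction sketch (2D slice): the projection angles
# are split into subsets and the estimate is updated from each subset in turn,
# which accelerates convergence compared with using all projections at once.
import numpy as np
from skimage.transform import radon, iradon

def os_reconstruct(sinogram, angles, n_subsets=8, n_iters=10, relax=0.1):
    """sinogram: (n_detectors, n_angles) array; angles: 1D array in degrees."""
    n_det = sinogram.shape[0]
    recon = np.zeros((n_det, n_det))
    subsets = [np.arange(i, len(angles), n_subsets) for i in range(n_subsets)]
    for _ in range(n_iters):
        for idx in subsets:
            sub_angles = angles[idx]
            # forward-project the current estimate for this subset of angles
            est = radon(recon, theta=sub_angles, circle=True)
            # residual between measured and estimated projections
            residual = sinogram[:, idx] - est
            # unfiltered back-projection of the residual, relaxed additive update
            recon += relax * iradon(residual, theta=sub_angles,
                                    circle=True, filter_name=None)
    return recon
```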

Ultimately, considering such a THz-oriented tomographic reconstruction procedure, the tomogram can be expected to reliably capture the 3D structure of the sample under investigation. Specifically, the intensity of each reconstructed voxel should be proportional to the overall attenuation encountered by the THz radiation at the corresponding position in the sample. From the partial dataset displayed in Fig. 2, with a selection of 2D THz projections (Fig. 2(a)) and a central sinogram (Fig. 2(b)), Fig. 3 displays the final reconstructed tomogram. The 3D internal structure of the leaves is clearly revealed. In the following, we describe our subsequent advanced 3D data handling. Following such a 3D reconstruction based on the THz-oriented OSC algorithm, further manipulations can be considered to deepen the sample study. Namely, for such a sample, the branched architecture can be characterized to provide structural, morphological and dimensional information about its internal content through THz inspection.


Fig. 3. Reconstruction of the three leaves: initial 3D rendering of the tomographic reconstruction.


4. Deep data processing

A global algorithmic workflow is given in Fig. 4, depicting the initial 3D reconstruction steps and the additional processing required to reach such a depiction. Namely, it first features a 3D segmentation (separating the object of interest from the background and noise) [1,18], followed by the labelling of each component (clustering and distinction of the different ROIs (Regions Of Interest) in the object), laying the groundwork for targeted processing such as, in the case of this sample, skeletonization. These steps make it possible to extensively explore the full potential of THz tomographic inspection.


Fig. 4. Implemented data and image processing sequences


As a first post-processing procedure, point-based segmentation using the K-means algorithm has been implemented to remove background and noise from the overall tomogram [19,20].

The K-means approach is an efficient clustering method, widely used in image processing and valued for its computational efficiency. This iterative process partitions the voxels into subsets of neighboring values through the minimization of the within-cluster variance. For such data processing approaches, an optimal compromise between computational efficiency and result accuracy can be estimated. Typically, on such a tomogram, the lowest subsets gather the low-intensity components containing the response of the air, the background and the surrounding noise. The leaves are then characterized by the next few clusters, corresponding respectively to the lower- and higher-absorption parts of the object, followed by the high-absorption cluster of the ramifications. Setting a larger number of subsets could be useful in cases where the sample features a greater variety of material attenuations. The resulting reconstruction, following this threshold-based segmentation, is featured in Fig. 5(a) and displays the tubular structure of the vascular network of the leaf. Following this step, the sample consists of several unconnected components that can subsequently be aggregated through a connected-component labelling algorithm [21]. A connected component is a voxel subset of maximal size such that all of its constituting voxels are spatially connected. The resulting labeled clusters form a set of VOIs (Volumes Of Interest), merged as the object under investigation.
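A minimal Python sketch of these two steps, voxel-intensity clustering followed by connected-component labelling, is given below. It relies on scikit-learn's KMeans and SciPy's ndimage.label rather than the authors' own implementation, and the cluster count and the choice of which clusters form the foreground are illustrative.

```python
# Hedged sketch: K-means clustering of voxel intensities, then connected-component
# labelling of the retained foreground (6-connectivity by default in ndimage.label).
import numpy as np
from sklearn.cluster import KMeans
from scipy import ndimage

def segment_and_label(tomogram, n_clusters=4):
    """tomogram: 3D array of reconstructed voxel attenuations."""
    values = tomogram.reshape(-1, 1)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(values)
    labels = km.labels_.reshape(tomogram.shape)
    # re-order clusters by increasing mean intensity: rank 0 = air/background/noise
    order = np.argsort(km.cluster_centers_.ravel())
    rank = np.zeros(n_clusters, dtype=int)
    rank[order] = np.arange(n_clusters)
    ranked = rank[labels]
    foreground = ranked > 0                       # drop the lowest-intensity cluster
    components, n_comp = ndimage.label(foreground)
    return ranked, components, n_comp
```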


Fig. 5. (a) Tubular shape of the reconstructed 3D leaves following the segmentation, clustering and rendering. (b) Skeleton of the vascular structure of the leaf extracted from a terahertz tomogram.


Following these standard procedures, more targeted investigations can be conducted, such as the implementation of a skeletonization module for complex 3D structures. Skeletonization is performed on a VOI mask and aims at constructing its morphological chart by reducing the dimension of the object under consideration, for simplified data handling. Skeletonization thus produces a compact representation of an object that reduces its dimensionality by one. Several computational approaches are available in the literature for extracting the skeleton of an object, some of which differ widely even at the level of their basic principles. A fundamental problem in shape skeletonization is the sensitivity of the skeleton to small perturbations of the input shape, which leads to the appearance of spurious branches. By assigning an importance metric to every skeleton point, one encodes the scale of the shape details. Simplified skeletons can subsequently be obtained by simply thresholding that metric at continuous values.

Such an approach preserves the 3D topology, the shape and the features significant for object recognition or classification, along with the positioning accuracy of the “skeleton” with respect to the object under investigation. Practically, for each branch of a whole VOI mask, a statistical barycenter point is defined, along with a vector tangent to the branch at this position. Ultimately, it produces a one-pixel/voxel-wide “skeleton”. More precisely, in this paper, a progressive thinning approach has been implemented to skeletonize the segmented data and label branches and nodes [21]. It consists of (1) iteratively eroding the shape until the thinnest structure is obtained and (2) labelling each point as a branch or a node to retrieve the structure’s hierarchy. Moreover, the tree-like hierarchy of the skeleton allows one to perform a variety of measurements along a given branch, for instance to assess internal dimensional homogeneity or to detect bottlenecks inside tubular structures. Such an approach was derived from pulmonary inspection techniques, revealing the arborescent structure of bronchioles.
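As an illustration of this thinning and branch/node labelling, the sketch below uses scikit-image's 3D thinning and classifies each skeleton voxel from its number of skeleton neighbours (1 neighbour: endpoint, 2: branch voxel, 3 or more: node). This is a generic substitute for the authors' progressive-thinning module, not their implementation; in recent scikit-image releases skeletonize also accepts 3D input directly.

```python
# Hedged sketch: thinning-based skeletonization of the segmented mask and
# branch/node classification by counting 26-connected skeleton neighbours.
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize_3d

def skeletonize_and_classify(mask):
    """mask: 3D boolean volume of the segmented vascular network."""
    skel = skeletonize_3d(mask.astype(np.uint8)) > 0   # one-voxel-wide skeleton
    kernel = np.ones((3, 3, 3), dtype=int)
    kernel[1, 1, 1] = 0                                # exclude the voxel itself
    n_neigh = ndimage.convolve(skel.astype(int), kernel, mode='constant')
    endpoints = skel & (n_neigh == 1)                  # branch extremities
    nodes = skel & (n_neigh >= 3)                      # bifurcations
    return skel, endpoints, nodes
```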

For this specific sample, the skeleton is made up of 613 branches (paths from the axil to an extremity) for a total of 2588 nodes. Its longest branch includes 63 segments and therefore 62 generations (number of bifurcations along a branch). A lateral dimensional analysis (in voxel size) can then be performed along this path, as reported in Fig. 6, with the depiction of the inner diameter of the lumen, the inner section of the vessel, in blue, and the thickness of the vein, in red. Along this branch, the coherence of this extraction is ensured by the continuity of the respective diameters, with still a few discontinuities induced by localized lack of contrast in the initial 3D reconstruction. Additionally, one can observe that the lumen (the inner space of the tubular structure, here bounded by the interior wall) widens and narrows along the branches of the skeleton due to the presence of nodes and of more elongated vein sections. A larger uncertainty on the vein wall diameter is also observed at the end of the branch. This specific limitation is induced by the blurring and lower contrast obtained at the extremities of the three leaves in the 3D reconstruction, visible in Fig. 2, which is directly induced by the limited depth of field of the THz imaging procedure.
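One common way to obtain such a caliber profile, shown in the sketch below, is to evaluate the Euclidean distance transform of the segmented mask at each ordered skeleton voxel of the branch: the distance to the nearest background voxel gives the radius of the largest inscribed sphere, i.e. half the local thickness. This is an assumed, generic measurement scheme (applied separately to the lumen mask and the vein-wall mask it yields profiles comparable to Fig. 6), not the authors' exact procedure.

```python
# Hedged sketch: local diameter along one skeleton branch via the
# Euclidean distance transform of the binary mask (units: voxels).
import numpy as np
from scipy import ndimage

def caliber_along_branch(mask, branch_voxels):
    """mask: 3D boolean volume; branch_voxels: (N, 3) ordered voxel coordinates."""
    dist = ndimage.distance_transform_edt(mask)   # radius of inscribed sphere
    zyx = tuple(np.asarray(branch_voxels).T)
    return 2.0 * dist[zyx]                        # local diameter profile along the branch
```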


Fig. 6. Results of the caliber shape analysis performed along the longest branch. In blue: average diameter of the lumen inside the tube. In red: average thickness of the leaf vein.


Through this advanced procedure that ultimately led to the skeletonization of the sample, several dimensional statistics can be extracted and computed, for a deepened characterization of the entire 3D structure of the sample.

5. Conclusion

In summary, real-time THz CT has been demonstrated, relying on a 2.5 THz QCL source and a microbolometer array camera within a fully integrated imaging unit. From sets of 2D sample projections, 3D tomograms have been processed by OSC, and the reconstructed datasets have been used as input for subsequent advanced 3D data processing procedures. The 3D visualization and surface or volume quantization have been performed and are shown in the accompanying Visualization 2, which renders the 3D reconstruction of a complex bent sample with an automated component labelling, followed by a skeletonization step. This advanced data processing provides additional metadata about the sample morphology. Our work demonstrates that this approach is particularly well suited to applications requiring non-destructive testing or precise analysis workflows, to achieve efficient and automated qualitative and quantitative measurements from full-field 3D THz acquisitions.

Acknowledgement

The authors thank the Lytid company for providing the apparatus for this demonstration.

Disclosures

The authors declare no conflict of interest.

Data Availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. H. Balacey, B. Recur, J. B. Perraud, J. B. Sleiman, J. P. Guillet, and P. Mounaix, “Tomography and image processing for polymer additive manufacturing characterization,” IEEE Trans. Terahertz Sci. Technol. 6(2), 191–198 (2016). [CrossRef]  

2. J. P. Guillet, B. Recur, L. Frederique, B. Bousquet, L. Canioni, I. Manek-Hönninger, P. Desbarats, and P. Mounaix, “Review of terahertz tomography techniques,” J. Infrared, Millimeter, Terahertz Waves 35(4), 382–411 (2014). [CrossRef]

3. B. Ferguson, S. H. Wang, D. Gray, D. Abbott, and X. C. Zhang, “T-ray computed tomography,” Opt. Lett. 27(15), 1312–1314 (2002). [CrossRef]

4. D. M. Mittleman, S. Hunsche, L. Boivin, and M. C. Nuss, “T-ray tomography,” Opt. Lett. 22(12), 904–906 (1997). [CrossRef]

5. E. Castro-Camus, M. Koch, and D. M. Mittleman, “Recent advances in terahertz imaging: 1999 to 2021,” Appl. Phys. B 128(1), 12 (2022). [CrossRef]  

6. M. W. Ayech and D. Ziou, “Segmentation of terahertz imaging using k-means clustering based on ranked set sampling,” Expert Syst. Appl. 42(6), 2959–2974 (2015). [CrossRef]

7. X. Yin, B. W. H. Ng, B. Ferguson, S. P. Mickan, and D. Abbott, “2-D wavelet segmentation in 3-D T-ray tomography,” IEEE Sens. J. 7(3–4), 342–343 (2007). [CrossRef]

8. N. Oda, “Uncooled bolometer-type terahertz focal plane array and camera for real-time imaging,” Compt. Rendus Phys. 11(7–8), 496–509 (2010). [CrossRef]  

9. N. Kanda, K. Konishi, N. Nemoto, K. Midorikawa, and M. Kuwata-Gonokami, “Real-time broadband terahertz spectroscopic imaging by using a high-sensitivity terahertz camera,” Sci. Rep. 7(1), 42540 (2017). [CrossRef]  

10. B. N. Behnken, G. Karunasiri, D. R. Chamberlin, P. R. Robrish, and J. Faist, “Real-time imaging using a 28 THz quantum cascade laser and uncooled infrared microbolometer camera,” Opt. Lett. 33(5), 440–442 (2008). [CrossRef]  

11. C. am Weg, W. von Spiegel, R. Henneberger, R. Zimmermann, T. Loeffler, and H. G. Roskos, “Fast active THz cameras with ranging capabilities,” J. Infrared Millim. Terahertz Waves 30(12), 1281–1296 (2009). [CrossRef]  

12. R. al Hadi, H. Sherry, J. Grzyb, Y. Zhao, W. Förster, H. M. Keller, A. Cathelin, A. Kaiser, and U. R. Pfeiffer., “A 1 k-pixel video camera for 0.7–1.1 terahertz imaging applications in 65-nm CMOS,” IEEE J. Solid-State Circuits 47(12), 2999–3012 (2012). [CrossRef]  

13. A. Chopard, E. Tsiplakova, N. Balbekin, O. Smolyanskaya, J.-B. Perraud, J.-P. Guillet, N. V. Petrov, and P. Mounaix, “Single-scan multiplane phase retrieval with a radiation of terahertz quantum cascade laser,” Appl. Phys. B 128(3), 63 (2022). [CrossRef]  

14. J. B. Perraud, “Reconstructions rapides d'images en régime térahertz 3D,” PhD thesis, Bordeaux University, 2018.

15. J. B. Perraud, A. Chopard, J. Guillet, P. Gellie, and P. Mounaix, “A Versatile Illumination System for Real-Time Terahertz Imaging,” Sensors 20(14), 3993 (2020). [CrossRef]  

16. B. Recur, H. Balacey, J. Bou Sleiman, J. B. Perraud, J.-P. Guillet, A. Kingston, and P. Mounaix, “Ordered subsets convex algorithm for 3D terahertz transmission tomography,” Opt. Express 22(19), 23299 (2014). [CrossRef]

17. B. Recur, H. Balacey, and P. Mounaix, “Expectation maximisation algorithms for terahertz transmission tomography,” Proc. SPIE 9199, 91990O (2014). [CrossRef]  

18. M. W. Ayech and D. Ziou, “Terahertz image segmentation using k-means clustering based on weighted feature learning and random pixel sampling,” Neurocomputing 175(Part A), 243–264 (2016). [CrossRef]  

19. J. A. Hartigan and M. A. Wong, “Algorithm AS 136: a K-means clustering algorithm,” J. Royal Stat. Soc. C 28(1), 100–108 (1979). [CrossRef]  

20. J. MacQueen, “Some methods for classification and analysis of multivariate observations,” Proceedings of the fifth Berkeley symposium on mathematical statistics and probability 1(14), 281–297 (1967).

21. H. Samet and M. K. Tamminen, “Efficient component labeling of images of arbitrary dimension represented by linear bintrees,” IEEE Trans. Pattern Anal. Machine Intell. 10(4), 579–586 (1988). [CrossRef]  

Supplementary Material (2)

Visualization 1
Visualization 2
