Optica Publishing Group

Radial lens distortion correction with sub-pixel accuracy for X-ray micro-tomography

Open Access

Abstract

Distortion correction or camera calibration for an imaging system which is highly configurable and requires frequent disassembly for maintenance or replacement of parts needs a speedy method for recalibration. Here we present direct techniques for calculating distortion parameters of a non-linear model based on the correct determination of the center of distortion. These techniques are fast, very easy to implement, and accurate at sub-pixel level. The implementation at the X-ray tomography system of the I12 beamline, Diamond Light Source, which strictly requires sub-pixel accuracy, shows excellent performance in the calibration image and in the reconstructed images.

© 2015 Optical Society of America

1. Introduction

Parallel X-ray tomography is an imaging technique by which the internal 3D structure of a sample can be reconstructed from 2D projections. The projection images are formed by the penetration of parallel X-rays through the sample at a series of different angles in the range of [0; π]. Due to the parallelism of the penetrating X-rays, the obtained 2D projections can be separated into independent 1D-projection rows. The sequence of these rows throughout the angular projection range forms a sinogram, a 2D data array corresponding to each individual row. Applying a reconstruction method such as filtered back-projection (FBP) [1] on an individual sinogram yields a reconstructed 2D slice of the sample. Combining all slices creates the 3D image of the sample.

The recording of the 2D projections is often accomplished through a visible light sensor which is coupled to an X-ray-sensitive scintillation crystal, as in the design of the camera system (Fig. 1(a)) at the beamline I12, Diamond Light Source [2]. The visible light emitted by the scintillator is imaged onto the sensor using visible light optics. The scintillator, the visible light optics, and the detector image plane are designed to be parallel; i.e. the optical axis is perpendicular to both the scintillator plane and the detector plane, thus avoiding any perspective distortion. These optics are optimised for resolution, efficiency and radiation hardness. As a trade-off, the optical design may produce geometrically distorted images. Typically, the transformation from the object to the image is non-linear, and due to the symmetry of the lens design the distortion has radial symmetry. If image magnification decreases with the radius, it is known as ‘barrel’ distortion, and the converse is known as ‘pincushion’ distortion. The distortion observed in the camera system used here is of the barrel type, see Fig. 1(b).

Fig. 1 (a) Schematic of the camera system at I12. X-rays enter an optics module (bottom left). A visible light image is generated in the scintillator and imaged via visible light optics onto a commercial sensor (top right). The visible-light path is folded twice; (b) Visible light from a dot pattern is imaged with the camera system shown in (a) in which barrel distortion is clearly visible. A rectangle indicates the active field of view of the tomographic acquisition reflecting the borders of X-ray illumination.

As a consequence, in a real camera system exhibiting distortion, the geometrical conditions for parallel-beam tomography are disturbed. In a well aligned tomography instrument, the axis of rotation is oriented perpendicular to the rows of the digital sensor. The virtual image of these rows in the plane of the detection scintillator will exhibit the distortions imposed by the optical system. Recording a tomographic data set (Fig. 2(a)) through such a system will have a detrimental impact on the 3D-image fidelity in three distinctly different ways. (In what follows we assume a vertically oriented rotation axis and horizontally aligned sensor rows.)

Fig. 2 (a) X-ray projection of a sample made of an assembly of spheres in which the center of rotation (CoR) is indicated by a vertical line, and the center of distortion (CoD) is indicated by a cross; (b) Part of the reconstructed slice through the CoD (indicated by the white line numbered 1 in (a)) shows the distortion of spheres (arrows) increasing with their distance from CoD (indicated by a vertical line); (c) Sinogram at the top of a sphere, indicated by the white line numbered 2 in (a), shows the effect of vertical displacement (arrows).

First, the row through the center of distortion (CoD), although not distorted into a curved line, will exhibit a continuously varying pixel size. As a result, the reconstructed image from this row shows distortion artifacts which increase with distance from the CoD (Fig. 2(b)).

Second, as all other rows will exhibit curvature, they record sinograms which do not represent a distinct sample volume but will contain contributions from vertically adjacent volumes moving in and out of a single row’s field of view (Fig. 2(c)). In addition, the effect of the non-constant pixel size described above also applies.

Finally, the most obvious effect is the distortion of the individual 2D-projections. Even neglecting the above-mentioned artifacts, the tomographic reconstruction will still not render a truthful 3-dimensional representation of the object, due to the horizontal displacement of sample volumes away from their true positions.

The methods of distortion correction or camera calibration have a long history of development [3] and are well established in computer vision and photogrammetry [4]. In camera calibration, a camera model is chosen and its characteristic parameters are calculated [5–8]. In some cases, where it is known that the optical installation will remain fixed for a long period, a lengthy, computationally intensive method may be suitable. However, for an installation which is highly configurable and requires frequent disassembly for maintenance or replacement of parts, maintaining a suitable degree of distortion correction requires frequent recalibration. In such a case, a fast, accurate, and easy-to-implement method is desirable. Regarding the techniques used for calculating these parameters, calibration methods can be classified as [9,10]:

- Non-linear model and iterative techniques: A non-linear camera model including different types of lens distortion is used. The parameters are obtained by iteratively optimizing a cost function. These approaches have high computational cost and require a good initial guess for convergence.

- Linear model and direct techniques: These techniques do not require a lens distortion model, and so are useful for arbitrary unknown sources of distortion. They use the least-squares method to calculate a transformation matrix which converts known coordinates of an object to observed points in the image. They are fast but have low accuracy compared to other methods.

- Two-step methods: These methods combine direct techniques for calculating some initial parameters of a non-linear model with iterative techniques for the rest of the parameters. These techniques are often used for high-accuracy camera calibration.

In most of these approaches, the location of the CoD is included as a parameter of the method which needs to be optimized as a part of the solution. In our work, the CoD is calculated independently from the distortion parameters. It is crucial to have an accurate location for the CoD. This point defines the coordinate system for the radial distortion model and any offset will result in errors increasing with radius. Furthermore, in the system under consideration (Fig. 1(a)) the routine mechanical alterations will not alter the lens characteristics, but may alter the centering of the optical axis with respect to the detector.

In this report, we present our approaches for correcting the radial distortion of the radiation-hard optics used at the X-ray tomography system of the beamline I12. We propose a method for accurately determining the CoD and direct techniques to calculate the distortion parameters of a non-linear model. Our techniques require a single image of a calibration target aligned perpendicular to the optic axis and thus imaged without perspective distortion. First, the CoD is calculated. Then, the parameters of the radial distortion model are determined by a geometric formulation. The resulting algebraic system is solved by the least-squares method. The details of the calibration procedure are presented in section 2. Section 3 shows the results of applying our techniques to the calibration image and tomographic data.

2. Calibration procedure

2.1 Calibration target

The camera system at I12 for X-ray tomography (Fig. 1(a)) consists of a set of bespoke-designed optics which form an image of a high-resolution scintillator onto the sensor of a commercial PCO Edge camera (2560 × 2160 pixels, sensor pixel size 6.5 µm). In this work, we used optics with low optical resolution providing a spatial resolution of 18.6 µm at the scintillator. For calibration, the scintillator was replaced by a NIST-traceable square grid of dots having a diameter of 0.25 mm and an equidistant spacing of 0.5 mm, and visible light was used instead of X-rays (Fig. 1(b)). Image analysis of the dot pattern provides a sub-pixel estimation of grid location using the center of mass (CoM) of each dot.

Although our method is aimed at using X-ray calibration samples, here we prefer to use a calibrated visible light target, for two reasons. Firstly, we provide a traceable method for the reader and the facility user to verify our method and its accuracy. Secondly, the specific X-ray energy in use at our facility is very high (53 keV – 150 keV), which makes the provision of an attenuation target with sufficient microscopic definition difficult and would therefore make an additional calibration step necessary to characterize target fabrication errors. The system of interchangeable holders for the scintillators and targets (component highlighted with magenta in Fig. 1(a)) is confocal and fixed magnetically to the front-end optics. Thus, they may be exchanged quickly without any other adjustments. Certainly, different types of targets (for visible light or X-rays) could be chosen for our method. The requirement is for straight and equidistant lines vertically and horizontally; they need not be precisely perpendicular.

2.2 Calculation of the center of distortion

Firstly, a captured image of the dot pattern is segmented by the threshold method to obtain the binary image of the dots [11]. Then the coordinates of the CoM of every dot are calculated and recorded. The set of CoMs belonging to each row and column (Fig. 3 ) is analyzed separately as follows.
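The segmentation and CoM step above can be sketched in Python (a minimal illustration, not the Mathematica implementation used for this work; the function name `dot_centers` and the fixed threshold handling are our assumptions):

```python
import numpy as np
from scipy import ndimage

def dot_centers(image, threshold):
    """Threshold the dot-pattern image, label each connected component (dot),
    and return the intensity-weighted center of mass of every dot."""
    mask = image > threshold
    labels, n = ndimage.label(mask)
    # CoM weighted by the original intensities inside each dot
    return np.array(ndimage.center_of_mass(image * mask, labels, range(1, n + 1)))
```

The returned sub-pixel coordinates are then grouped into rows and columns for the parabola fits that follow.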

Fig. 3 Coordinates of the CoM of the dots. Parabolas are fitted to the horizontal gridlines (i-index) and the vertical gridlines (j-index). Only a small number of fitted parabolas are shown.

Points on each gridline are fitted to parabolas [12] in which the horizontal gridlines are represented by

$$y = a_i x^2 + b_i x + c_i, \tag{1}$$
and vertical gridlines by

$$x = a_j y^2 + b_j y + c_j. \tag{2}$$

A rough estimate of the CoD coordinates ($x_c$, $y_c$) is obtained by locating the pair of parabolas between which the second-order coefficient $a$ changes sign. The average of the axis intercepts $c$ of these two parabolas yields the desired estimate. The origin of the coordinate system is then shifted to this new CoD and the parabolic fit is applied again to update the coefficients. This estimate is accurate to within ±½ of the dot pitch of the grid.
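The rough estimate can be sketched as follows (Python; a simplified 1-D illustration under the assumption that the gridline points are already grouped, with the hypothetical helper name `rough_center_1d`):

```python
import numpy as np

def rough_center_1d(gridlines):
    """Fit y = a*x**2 + b*x + c to each horizontal gridline and return a
    rough y_c: the average intercept of the two adjacent parabolas whose
    curvature 'a' changes sign (the CoD lies between them)."""
    coeffs = np.array([np.polyfit(x, y, 2) for x, y in gridlines])
    a, c = coeffs[:, 0], coeffs[:, 2]
    i = np.where(np.sign(a[:-1]) != np.sign(a[1:]))[0][0]
    return 0.5 * (c[i] + c[i + 1])
```

The same function applied to the vertical gridlines (with x and y swapped) gives the rough $x_c$.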

To calculate the CoD coordinates accurately, we vary them within the bounds of the above estimate until the following condition is satisfied: for each parabola, the point with minimum distance to the current CoD is located. The vertical parabolas yield one set of points which is used to determine $x_c$, and the horizontal parabolas similarly yield $y_c$. The candidate CoD is correct when each set of points lies on a straight line through it. Figure 4 illustrates the improvement in accuracy of the calculated CoD after this procedure is applied; the residuals of the linear fit are decreased and uniformly scattered after refining (Fig. 4(b)) compared to the residuals before (Fig. 4(a)).
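The refinement criterion can be sketched in Python as follows (the fine-sampling step, the search-window width, and the function names are illustrative assumptions; the refined CoD is the candidate minimizing the straightness residual):

```python
import numpy as np

def nearest_points(parabolas, xc, yc, half_width=50.0, samples=2001):
    """For each horizontal parabola y = a*x**2 + b*x + c, sample it finely
    around the candidate CoD and keep the sampled point nearest to (xc, yc)."""
    xs = np.linspace(xc - half_width, xc + half_width, samples)
    pts = []
    for a, b, c in parabolas:
        ys = a * xs**2 + b * xs + c
        k = np.argmin((xs - xc) ** 2 + (ys - yc) ** 2)
        pts.append((xs[k], ys[k]))
    return np.array(pts)

def straightness_residual(parabolas, xc, yc):
    """Sum of squared residuals of a straight-line fit (x as a function of y)
    through the nearest points; the refined CoD minimizes this value."""
    p = nearest_points(parabolas, xc, yc)
    res = np.polyfit(p[:, 1], p[:, 0], 1, full=True)[1]
    return float(res[0]) if res.size else 0.0
```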

Fig. 4 Deviation from the straight line of the points having minimum distance to the CoD (horizontal gridlines shown only) in two cases: (a) with the initial-estimate of the CoD; (b) with the refined CoD.

2.3 Calculation of the distortion parameters

Once the best CoD has been determined, the radial distortion parameters may be calculated. In this work, we use the polynomial model [5,6,8,13] for the radial distortion function. It is often easier to derive this function in a forward model, determining the equation of the mapping from the distorted image to the corrected (undistorted) image, than in a backward model, which uses the inverse mapping. Another advantage of the forward model is that it is straightforward to evaluate its accuracy. However, this model suffers from the problem of vacant pixels [6], resulting in a high computational cost for correction. A backward mapping can easily deal with this problem by linear interpolation, so it is frequently used in practice. The equations for the backward mapping may be derived by inverting the forward mapping equations [6], but we also present a method for determining the equations directly, which is possible because of the accurate CoD determined above. In the following sections, we present our approach for calculating the parameters of each of these models: forward (FW), backward-from-forward (BW-FW), and backward (BW), using the fact that the calibration grid has straight and equidistant lines of dots.

2.3.1 Forward model

In the FW model, the relationship between an undistorted image point (xu, yu) and a distorted image point (xd, yd) is described as [5]

$$F_f(r_d) \equiv \frac{r_u}{r_d} = k_0 + k_1 r_d + k_2 r_d^2 + k_3 r_d^3 + \dots + k_n r_d^n \tag{3}$$
where $r_u$ and $r_d$ are the distances from the CoD in the undistorted image and in the distorted image, respectively, and $k_0, \dots, k_n$ are the coefficients. The approach for calculating these coefficients is presented in the following. Here, we introduce subscripts and superscripts, $d$ and $u$, referring to distorted and undistorted coordinates. The procedure is demonstrated for the horizontal gridlines, and it is applied similarly to the vertical gridlines.

As shown in section 2.2, in the distorted image a horizontal gridline indexed i is represented by

$$y_d = a_i x_d^2 + b_i x_d + c_i. \tag{4}$$
After correction, this gridline should take the linear form

$$y_u = b_i x_u + c_i^u \tag{5}$$

where $c_i^u$ is the undistorted intercept and $b_i$ is unchanged (a consequence of the accurately determined CoD). From Eqs. (4) and (5), and considering that the ratio of the radii in Eq. (3) applies equally to each coordinate component (e.g. $x_u/x_d = F_f(r_d)$), we obtain

$$F_f(r_d) = \frac{c_i^u}{a_i x_d^2 + c_i}. \tag{6}$$

The values of $a_i$, $c_i$, $x_d$, and $r_d$ are known. The quantity $c_i^u$ is the y-intercept of the undistorted gridline $i$. To determine this value, we assume that the undistorted, uniform grid spacing may be extrapolated from the area near the CoD, where distortion is negligible. The undistorted grid, and hence each intercept $c_i^u$, is constructed by extrapolating from a few lines around the CoD. Now we can calculate the distortion coefficients by solving a system of linear equations, one for each dot on both horizontal and vertical gridlines (Fig. 3),

$$\begin{bmatrix} \vdots & \vdots & \vdots & & \vdots \\ 1 & r_d & r_d^2 & \cdots & r_d^n \\ \vdots & \vdots & \vdots & & \vdots \end{bmatrix} \begin{bmatrix} k_0 \\ k_1 \\ k_2 \\ \vdots \\ k_n \end{bmatrix} = \begin{bmatrix} \vdots \\ c_i^u/(a_i x_d^2 + c_i) \\ \vdots \\ c_j^u/(a_j y_d^2 + c_j) \\ \vdots \end{bmatrix} \tag{7}$$
by the least-squares method. Using the above procedure, we can easily change to different types of polynomial models, such as the division model or the even-or-odd polynomial model [13], for applications to lenses with different degrees of distortion.
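The least-squares solution of this system can be sketched as follows (Python; the target values $F = c_i^u/(a_i x_d^2 + c_i)$ are assumed to be precomputed from the fitted parabolas, and the function name is our choice):

```python
import numpy as np

def fit_distortion_coeffs(r, F, order=5):
    """Least-squares fit of k_0..k_order such that
    F ≈ k_0 + k_1*r + ... + k_order*r**order (one equation per dot)."""
    A = np.vander(r, order + 1, increasing=True)  # columns 1, r, r^2, ...
    k, *_ = np.linalg.lstsq(A, F, rcond=None)
    return k
```

The same routine serves for all three models; only the sample radii and the right-hand-side targets change.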

In the correction process, or image unwarping, points in the pixel-based coordinate system of the undistorted image (a blank pixel array) need to be filled in from the distorted image. However, Eq. (3) only allows mapping from the distorted image to the undistorted image, which means that many mapped points may not land on the pixel-based grid of the corrected image. This mapping causes the problem of vacant pixels [6,13], as shown in Fig. 5(a). Filling in vacant pixels is not straightforward and is computationally costly [14,15]. Here, we use the super-sampling technique [16] and nearest-neighbor interpolation for correction (Fig. 5(c)), in which a pixel in the distorted image is divided into 5 × 5 pieces.
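A minimal sketch of the super-sampled forward correction follows (Python; `sub=5` matches the 5 × 5 subdivision in the text, while the nearest-pixel accumulation and normalization details are our assumptions):

```python
import numpy as np

def unwarp_forward(img, k, xc, yc, sub=5):
    """Forward unwarping with super-sampling: split every distorted pixel
    into sub x sub sub-pixels, push each through F_f(r_d) (Eq. (3)), and
    accumulate into the nearest corrected pixel; averaging fills the
    vacant pixels."""
    h, w = img.shape
    acc = np.zeros((h, w))
    cnt = np.zeros((h, w))
    offsets = (np.arange(sub) + 0.5) / sub - 0.5  # sub-pixel offsets
    yi, xi = np.indices((h, w), dtype=float)
    for dy in offsets:
        for dx in offsets:
            xd, yd = xi + dx - xc, yi + dy - yc
            rd = np.hypot(xd, yd)
            Ff = np.polynomial.polynomial.polyval(rd, k)  # k0 + k1*rd + ...
            xu = np.rint(xd * Ff + xc).astype(int)
            yu = np.rint(yd * Ff + yc).astype(int)
            m = (xu >= 0) & (xu < w) & (yu >= 0) & (yu < h)
            np.add.at(acc, (yu[m], xu[m]), img[m])
            np.add.at(cnt, (yu[m], xu[m]), 1.0)
    return acc / np.maximum(cnt, 1.0)
```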

Fig. 5 (a) Corrected image with vacant pixels (black lines). The area of negligible distortion around the CoD shows no missing pixels; (b) Magnified view of the areas indicated by the white square in (a); (c) Filled-in vacant pixels in (b) by super-sampling technique.

2.3.2 Backward-from-forward model

The backward model takes the location of an image point in the undistorted image and calculates the corresponding point in the distorted image by

$$F_b(r_u) \equiv \frac{r_d}{r_u} = k_0 + k_1 r_u + k_2 r_u^2 + k_3 r_u^3 + \dots + k_n r_u^n. \tag{8}$$
In general, this location does not coincide with a pixel coordinate, so the pixel value is calculated by bilinear interpolation using the four neighbouring pixels. This method is much faster than the technique used in the forward model. To determine the distortion coefficients, two alternative approaches are possible: numerical least-squares inversion of the forward model equations, or least-squares fitting of the backward model equations.
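The backward unwarping step can be sketched as follows (Python; clamping at the image edges is our assumption):

```python
import numpy as np

def unwarp_backward(img, k, xc, yc):
    """Backward unwarping: every pixel of the blank corrected image is
    mapped through F_b(r_u) (Eq. (8)) into the distorted image and filled
    by bilinear interpolation of the four neighbouring pixels."""
    h, w = img.shape
    yu, xu = np.indices((h, w), dtype=float)
    xu, yu = xu - xc, yu - yc
    ru = np.hypot(xu, yu)
    Fb = np.polynomial.polynomial.polyval(ru, k)  # k0 + k1*ru + ...
    xd, yd = xu * Fb + xc, yu * Fb + yc
    x0 = np.clip(np.floor(xd).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(yd).astype(int), 0, h - 2)
    fx = np.clip(xd - x0, 0.0, 1.0)
    fy = np.clip(yd - y0, 0.0, 1.0)
    return (img[y0, x0] * (1 - fx) * (1 - fy) + img[y0, x0 + 1] * fx * (1 - fy)
            + img[y0 + 1, x0] * (1 - fx) * fy + img[y0 + 1, x0 + 1] * fx * fy)
```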

In the first approach, a system of linear equations is formed for the inversion of the set of forward equations (Eq. (3)) as

$$\begin{bmatrix} \vdots & \vdots & & \vdots \\ 1 & r_d F_f(r_d) & \cdots & \big(r_d F_f(r_d)\big)^n \\ \vdots & \vdots & & \vdots \end{bmatrix} \begin{bmatrix} k_0 \\ k_1 \\ \vdots \\ k_n \end{bmatrix} = \begin{bmatrix} \vdots \\ 1/F_f(r_d) \\ \vdots \end{bmatrix} \tag{9}$$
which is solved by the least-squares method.
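This numerical inversion can be sketched as follows (Python; the sample radii are assumed to come from the measured dot positions, and the function name is hypothetical):

```python
import numpy as np

def backward_from_forward(kf, r_d, order=5):
    """Fit backward coefficients kb so that F_b(r_u) ≈ 1/F_f(r_d) at the
    sample points r_u = r_d * F_f(r_d), i.e. a least-squares inversion of
    the forward model."""
    Ff = np.polynomial.polynomial.polyval(r_d, kf)
    ru = r_d * Ff
    A = np.vander(ru, order + 1, increasing=True)
    kb, *_ = np.linalg.lstsq(A, 1.0 / Ff, rcond=None)
    return kb
```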

2.3.3 Backward model

In the BW model approach, we derive the equation for the backward transform directly by combining Eqs. (4), (5), and (8), following the procedure used for the forward model, to yield

$$F_i \equiv \frac{a_i x_d^2 + c_i}{c_i^u} = F_b(r_u) \tag{10}$$
for the horizontal direction. Similarly, for the vertical direction we get
$$F_j \equiv \frac{a_j y_d^2 + c_j}{c_j^u} = F_b(r_u), \tag{11}$$
where $c_i^u$ and $c_j^u$ are calculated in the same way as in the forward model, and obtain the system of equations
$$\begin{bmatrix} \vdots & \vdots & & \vdots \\ 1 & r_d/F_i & \cdots & (r_d/F_i)^n \\ \vdots & \vdots & & \vdots \\ 1 & r_d/F_j & \cdots & (r_d/F_j)^n \\ \vdots & \vdots & & \vdots \end{bmatrix} \begin{bmatrix} k_0 \\ k_1 \\ \vdots \\ k_n \end{bmatrix} = \begin{bmatrix} \vdots \\ F_i \\ \vdots \\ F_j \\ \vdots \end{bmatrix} \tag{12}$$
which is again solved by the least-squares technique. The above models were implemented in Mathematica [11].

3. Results

3.1 Calibration target

For correcting the image of the calibration target, we apply all techniques described in section 2.3 including the FW model (Eq. (7)), the BW-FW model (Eq. (9)), and the BW model (Eq. (12)). We use distortion coefficients up to the fifth order, as there is no significant gain in accuracy with higher order [17].

In order to evaluate the quality of the correction method, we compare the corrected dot image with a distortion-free reference grid [18,19 ]. By calculating the displacement between the corrected dot locations and the reference grid, we produce a graph of the residual distortion in image coordinates. As it is impossible to image a real reference grid without distortion, our reference is a calculated image.

Firstly, the locations of the corrected dots are segmented as in section 2.2. An initial estimate of the reference grid is obtained from the central part of the corrected grid using dots within 5 spacings of the CoD. The coordinates of equidistant points on an orthogonal grid are calculated and registered with the corrected dot locations, applying translation, rotation, and dilatation to minimize the total displacement. This forms the reference grid (see Fig. 6), which has the properties of the original standard grid and is aligned with the distortion axis in the same way as the original grid.
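The registration of the ideal grid with the corrected dots can be sketched as a least-squares similarity fit (Python; the complex-number formulation is our choice, not necessarily the one used in the paper):

```python
import numpy as np

def align_grid(ref, pts):
    """Least-squares similarity transform (translation + rotation + uniform
    dilatation) mapping the ideal reference grid onto the corrected dot
    locations; returns the aligned grid and the residual displacements."""
    # represent 2-D points as complex numbers; a similarity is z -> s*z + t
    zr = ref[:, 0] + 1j * ref[:, 1]
    zp = pts[:, 0] + 1j * pts[:, 1]
    zr0, zp0 = zr - zr.mean(), zp - zp.mean()
    s = np.vdot(zr0, zp0) / np.vdot(zr0, zr0)  # optimal scale-rotation
    za = s * zr0 + zp.mean()                   # aligned reference grid
    aligned = np.column_stack([za.real, za.imag])
    return aligned, np.hypot(*(pts - aligned).T)
```

The residual displacements returned here are what is plotted against position to quantify the remaining distortion.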

Fig. 6 Magnified view of the part of the image having the highest distortion (bottom-left corner) with the overlay of the ideal gridlines: (a) Distorted image; (b) Corrected image from the BW model.

Figure 7 shows the residual between the corrected points and their ideal positions, plotted against both x and y positions, for each of the three models evaluated. As can be seen, all methods give excellent performance with sub-pixel accuracy. The fitting residual after aligning the model grid with the corrected dots scatters on the scale of 1/3 of a pixel; the greatest difference for any individual dot from its reference point is 0.77 pixels, but the great majority are below 0.4 pixels from their reference. At this scale, artifacts due to segmentation of the dots are unavoidable and add scatter to the result. Other artifacts could be introduced by imperfect illumination or by deviation from flatness of the mirrors or transmission windows in the optics module, and could contribute to any apparent pattern in the residuals.

Fig. 7 Residual of the corrected points against their ideal positions in x and y coordinates from different correction models. (a, d) FW model; (b, e) BW-FW model; (c, f) BW model.

3.2 Tomographic image

A sample consisting of high-precision titanium spheres (3 mm diameter) and polypropylene spheres (3 mm diameter) placed in a polytetrafluoroethylene tube was used for evaluating the distortion correction of tomographic data (Fig. 2(a)). Tomographic data sets of 2400 projections in the range of [0; 180°] were collected using monochromatic 53 keV X-rays, a CdWO4 scintillator, and the camera system described in section 2.1. The sample was positioned at 51 m from the source, and the sample-detector distance was 1000 mm to take advantage of edge enhancement from refraction.

Figure 8 shows the reconstructed slices [20] of a row passing through the CoD, which is labelled “1” in Fig. 2(a), before and after correction. In the distorted slice, in the region close to the CoD (Fig. 8(a), arrowed), the spheres show continuous boundaries. However, farther away from the CoD, artifacts are clearly visible (Figs. 8(c) and 8(e)). These distinct artifacts are nicely removed after distortion correction (Figs. 8(b), 8(d), and 8(f)). The improvement in shape fidelity is quantified by the eccentricity of the sphere in the selected region No. 3 in Figs. 8(a) and 8(b): 0.33 before and 0.094 after correction.

Fig. 8 Reconstructed images from distorted data (a) and corrected data (b) with magnified views from areas indicated by the white frames: (c) Magnified view from frame 1 in (a); (d) Magnified view from frame 1 in (b); (e) Magnified view from frame 2 in (a); (f) Magnified view from frame 2 in (b). Note the poor reconstruction of features (arrowed) in case (c) in which the problem may not be immediately obvious to the eye. These features are clearly reconstructed in case (d).

The reconstruction of a row labelled “2” in Fig. 2(a) illustrates the effect of vertical displacement (Fig. 9(a)). After correction, the projections of the top of the sphere stay inside the field of view of a single row (Fig. 9(b)), resulting in the recovery of the round shape in the reconstructed image (Fig. 9(c), arrowed).

Fig. 9 (a) Part of the reconstructed image containing Ti-sphere (arrowed) and air-bubbles from the sinogram in Fig. 2(c); (b) Sinogram after correction; (c) Part of the reconstructed image from (b) where the shape of Ti-sphere is nicely recovered (arrowed) and the bubble no longer exhibits a distorted shape and bright arc artefacts.

4. Conclusion

We proposed a method for highly accurate determination of the centre of distortion in an image suffering from radial distortion. This allowed us to apply algebraic methods for obtaining the distortion coefficients in a polynomial model. Equations suitable for either forward-mapping or backward-mapping unwarping models were derived. The accurate location of the CoD enables the direct backward equation to be derived.

The proposed techniques combine metric and non-metric approaches and use a calibration target for reference. They are categorized as direct techniques because no iterative adjustment of distortion coefficients is necessary. The BW and the BW-FW models are of practical use because they have low computational cost for image unwarping. All the proposed techniques yield sub-pixel accuracy with a residual distortion of below 0.5 pixels for almost the entire image of the calibration target.

Distortion correction with high accuracy leads to an improvement in synchrotron-based parallel-beam X-ray tomography, where specially designed optics are used for imaging X-rays after conversion to visible light. Here, distortion as small as 1 pixel introduces artifacts in the reconstructed images. The performance of the correction methods has been tested against high-resolution X-ray tomographic data collected at beamline I12 at the Diamond Light Source, U.K. Distinct distortion artifacts are successfully removed.

The Mathematica code that we have used for this work is available on request by e-mail.

Acknowledgments

This work was carried out with the support of the Diamond Light Source.

References and links

1. A. C. Kak and M. Slaney, Principles of Computerized Tomographic Imaging (Institute for Electrical and Electronic Engineers, 1988).

2. M. Drakopoulos, T. Connolley, C. Reinhard, R. Atwood, O. Magdysyuk, N. Vo, M. Hart, L. Connor, B. Humphreys, G. Howell, S. Davies, T. Hill, G. Wilkin, U. Pedersen, A. Foster, N. De Maio, M. Basham, F. Yuan, and K. Wanelik, “I12: the Joint Engineering, Environment and Processing (JEEP) beamline at Diamond Light Source,” J. Synchrotron Radiat. 22(3), 828–838 (2015). [CrossRef]   [PubMed]  

3. T. A. Clarke and J. G. Fryer, “The development of camera calibration methods and models,” Photogramm. Rec. 16(91), 51–66 (1998). [CrossRef]  

4. F. Remondino and C. Fraser, “Digital camera calibration methods: considerations and comparisons,” Int. Arch. Photogrammetry, Remote Sens. Spatial Inf. Sci. 36(5), 266–272 (2006).

5. S. Shah and J. K. Aggarwal, “A simple calibration procedure for fish-eye (high distortion) lens camera,” in Proc. 1994 IEEE Int. Conf. Robotics and Automation (1994), pp. 3422–3427. [CrossRef]  

6. K. V. Asari, S. Kumar, and D. Radhakrishnan, “A new approach for nonlinear distortion correction in endoscopic images based on least squares estimation,” IEEE Trans. Med. Imaging 18(4), 345–354 (1999). [CrossRef]   [PubMed]  

7. C. Hughes, P. Denny, E. Jones, and M. Glavin, “Accuracy of fish-eye lens models,” Appl. Opt. 49(17), 3338–3347 (2010). [CrossRef]   [PubMed]  

8. C. Ricolfe-Viala and A.-J. Sanchez-Salmeron, “Lens distortion models evaluation,” Appl. Opt. 49(30), 5914–5928 (2010). [CrossRef]   [PubMed]  

9. J. Weng, P. Cohen, and M. Herniou, “Camera calibration with distortion models and accuracy evaluation,” IEEE Trans. Pattern Anal. Mach. Intell. 14(10), 965–980 (1992). [CrossRef]  

10. J. Salvi, X. Armangué, and J. Batlle, “A comparative review of camera calibrating methods with accuracy evaluation,” Pattern Recognit. 35(7), 1617–1635 (2002). [CrossRef]  

11. Wolfram Research, Inc., Mathematica, Version 9.0, Champaign, IL, 2013.

12. D. G. Bailey, “A new approach to lens distortion correction,” in Proceedings of Image and Vision Computing New Zealand Conference, (2002), pp. 59–64.

13. C. Hughes, M. Glavin, E. Jones, and P. Denny, “Review of geometric distortion compensation in fish-eye cameras,” in IET Irish Signals and Systems Conference (2008), pp. 162–167. [CrossRef]  

14. C. Ishii, Y. Sudo, and H. Hashimoto, “An image conversion algorithm from fish eye image to perspective image for human eyes,” in Proc. of the IEEE/ASME Int. Conf. on Advanced Intelligent Mechatronics (2003), pp. 1009–1014. [CrossRef]  

15. J. C. A. Fernandes, M. J. O. Ferreira, J. A. B. C. Neves, and C. A. C. Couto, “Fast correction of lens distortion for image applications,” in Proc. of the IEEE Int. Symposium on Industrial Electronics (1997), pp. 708–712. [CrossRef]  

16. G. Wolberg, Digital Image Warping (Institute for Electrical and Electronic Engineers Computer Society, 1992).

17. H. Haneishi, Y. Yagihashi, and Y. Miyake, “A new method for distortion correction of electronic endoscope images,” IEEE Trans. Med. Imaging 14(3), 548–555 (1995). [CrossRef]   [PubMed]  

18. C. Ricolfe-Viala and A. J. Sanchez-Salmeron, “Robust metric calibration of non-linear camera lens distortion,” Pattern Recognit. 43(4), 1688–1699 (2010). [CrossRef]  

19. J. Park, S.-C. Byun, and B.-U. Lee, “Lens distortion correction using ideal image coordinates,” IEEE Trans. Consum. Electron. 55(3), 987–991 (2009). [CrossRef]  

20. N. T. Vo, M. Drakopoulos, R. C. Atwood, and C. Reinhard, “Reliable method for calculating the center of rotation in parallel-beam tomography,” Opt. Express 22(16), 19078–19086 (2014). [CrossRef]   [PubMed]  
