Optica Publishing Group

Electrothermal-MEMS-induced nonlinear distortion correction in photoacoustic laparoscopy

Open Access

Abstract

Micro-electro-mechanical systems (MEMS) scanners offer significant advantages of miniature size, fast response, and high stability, which makes them particularly applicable to photoacoustic laparoscopy (PAL). However, the tilt angle–voltage curve of an electrothermal MEMS is nonlinear, which leads to inevitable nonlinear distortion in photoacoustic imaging. To overcome this problem, a nonlinear distortion correction was developed for a high-resolution forward-scanning electrothermal-MEMS-based PAL. An adaptive resampling method (ARM) was introduced to adaptively calibrate the projection of the non-uniform scanning region to match the uniform scanning region. The correction achieved low time complexity and high portability owing to the adaptive decomposition of the distortion in the reconstruction of physical models. Compared with the sample structure, phantom experiments demonstrated that the distortion was calibrated in all directions and that the corrected image provided a structural similarity of up to 96.82% in a local subset. Furthermore, ARM was applied to imaging the abdominal cavity of a rat, and the vascular morphology was corrected in a real-time display with a delay of less than 2 seconds. These results demonstrate that the nonlinear distortion correction provides timely and effective correction in PAL and suggest that it can be applied to other electrothermal-MEMS-based photoacoustic imaging systems for accurate and quantitative functional imaging.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Photoacoustic imaging (PAI) is a fast-developing imaging technique based on rich optical contrast, which provides various anatomical and functional information such as vascular structure, oxy- and deoxy-hemoglobin concentration, total hemoglobin concentration, and blood flow speed [1–6]. Photoacoustic endoscopy (PAE), a branch of PAI that embodies PAI in miniature probes, has been used in structural and functional imaging of various cavities, such as the intestines, prostate, and esophagus [7–10]. It can make up for the shortcoming of traditional optical endoscopic imaging, which barely obtains information from deep tissue, and has been applied to early gastrointestinal tumor recognition, alimentary tract inflammation detection, vascular feature imaging of Crohn's disease, and three-dimensional imaging of the lipid core in intravascular plaques [11–15]. To extend the application to surgery, a photoacoustic laparoscopy (PAL) imaging system is developed in this paper. On the one hand, optical laparoscopy, as a minimally invasive intervention, has played a prominent role in surgical monitoring and prognosis evaluation in various clinical practices, for example, liver and uterine tumor resection, bile and kidney stone surgery, gastric perforation treatment, and duodenal resection [16–18]. On the other hand, PAE has provided a certain depth of microvascular network imaging for tissues and organs [19,20]. Therefore, as the combination of optical imaging and PAI in a miniature probe, PAL would expand the clinical application of PAE and provide new technical means for the diagnosis and treatment of diseases [21–23].

For the PAL probe, it is necessary to realize fast forward-scanning and miniaturized size simultaneously. In previous research, a fiber bundle with a fiber-tip Fabry-Perot ultrasonic sensor was integrated into the probe, and a forward-view photoacoustic endoscope with a wide field of view and high resolution was realized [24,25]. However, the fabrication of fiber-bundle imaging with a transparent Fabry-Perot sensor is complex and costly [26,27]. The emerging micro-electro-mechanical system (MEMS) scanners possess significant advantages of miniaturization and fast response, and have achieved notable results in photoacoustic microscopy [28–32]. Compared with electromagnetic MEMS, whose volume is relatively large, and electrostatic MEMS, whose scanning range is limited, electrothermal MEMS offers compact size, low cost, and larger deflection angles at relatively low driving voltage, which makes it the preferred scanner for miniaturized probes in clinical applications of the abdominal cavity [33–35]. However, the electrothermal MEMS response is nonlinear due to thermal drift, especially at low voltages. In addition, the nonlinear response is aggravated by the fast round-trip scanning, whose speed at both ends is equal to zero. Therefore, an inevitable nonlinear distortion occurs in PAL imaging, which is an urgent issue at present [36–38].

In this paper, we propose a nonlinear distortion correction for real-time-displayed PAL. An adaptive resampling method (ARM) was introduced to adaptively calibrate the projection of the non-uniform scanning region to match the projection of the uniform scanning region. The maximum amplitude projection (MAP) of a photoacoustic image was decomposed in two perpendicular directions and divided into segments. It should be noted that the angular scan of the beam corresponds to the linear scan of the focused light spot. The ideal average scan speed per segment was calculated as the criterion for the division between the non-uniform and uniform scanning regions. The scan speed curve of a B-scan was estimated by fitting the average scan speeds along the same direction, and the scanning feature curve of the electrothermal MEMS in one direction was obtained by fitting the scan speed curves with a suitable fitting function. ARM has the characteristics of low time complexity and high portability and was applied in the image reconstruction process for real-time display without post-processing. The distortion was corrected to the micron level within a delay of less than 2 seconds. The effectiveness of this method was verified using the structural similarity, which reached 96.82% in a local subset, indicating that ARM provides precise images for real-time display in electrothermal-MEMS-based PAL.

2. Materials and methods

2.1 Photoacoustic laparoscopy system

Figure 1(a) shows the schematic of the entire optical-resolution PAL system. The laser source (532 nm, DTL-314QT, Russia) provides laser pulses with a repetition rate of up to 10 kHz and a pulse width of ∼7 ns. The laser pulses are monitored by a photodiode and filtered to be coupled into a single-mode fiber (core diameter ∼9 µm) through a fiber coupler (PAF-X-7-A, Thorlabs); eventually, the laser pulses are transmitted to the target after passing through the probe. Inside the probe, a two-axis electrothermal MEMS scanner (WM-LS-3.1, WiO Tech) is actuated by a field programmable gate array (FPGA). PA signals are amplified by an amplifier (LNA-650, RF Bay) and then digitized with a data acquisition card (100 MHz, M3i.4110, Spectrum). All depth-resolved A-lines are processed using a Hilbert transform and directly back-projected to Cartesian coordinates. The reconstructed MAP image is calibrated with ARM in sequence and then returned to the computer for real-time display. The optical signals are processed by a video acquisition card (VAQ) to form an optical image. Figure 1(b) shows a photo of the probe with a medical stainless-steel tube (18 mm outer diameter, 10 cm length) outside, which provides good biocompatibility. An endoscopic CCD camera (diameter ∼2.5 mm), whose visual field lies on the same focal plane as the PAI, was integrated into the side of the probe, and the tube was screwed onto the entire laparoscope housing (30 cm length, with an 8 cm handle). Figure 1(c) shows the structure of the PAL probe. A right-angle reflective prism (1.5 mm) redirects the collimated beam from a collimator (CFC-5X-A, Thorlabs) to the MEMS scanner; the beam is then focused on the sample surface through a plano-convex lens (diameter ∼6 mm, focal length 11.5 mm). The PA signal reflected by the cover glass (0.1 mm thickness) is detected by a 10 MHz flat ultrasound transducer (active area 3.5 mm diameter, bandwidth of 74%). The space among the cover glass, ultrasound transducer, and imaging window (1 mm thickness) is sealed with water.


Fig. 1. An overview of the PAL system. (a) Schematic of the PAL system. L1-L2, convex lens; PH, pinhole; BS, beam splitter; PD, photodiode; SMF, single mode fiber; AMP, amplifier; DAQ, data acquisition card; VAQ, video acquisition card; FPGA, field programmable gate array. (b) Photo of the assembled imaging probe. T, tube; CL, collimation lens; P, prism; MEMS, micro-electro-mechanical system; CC, CCD camera; UT, ultrasound transducer; IW, imaging window. (c) The schematic of the imaging probe marked by the dashed frame in (a) and the scanning trace in the IW. L, plano-convex lens; CG, cover glass. (d) The lateral resolution. (e) The axial resolution.


The performance of the PAL system was characterized by quantitatively measuring the spatial resolution and imaging speed. A sharp-edged surgical blade was used as the sample for the resolution experiments. The lateral resolution was estimated by calculating the full width at half maximum (FWHM) of the line spread function (LSF) curve, which was derived from the edge spread function (ESF) curve, as shown in Fig. 1(d). The axial resolution was estimated by calculating the FWHM of the A-line signal envelope along the depth direction at the blade edge, as shown in Fig. 1(e). At a laser pulse repetition rate of 10 kHz, the B-mode imaging rate was 20 Hz, and the corrected image was displayed within a delay of 1.2 to 2 seconds on a typical workstation (Intel Core i5-7500, 3.4 GHz, 4 cores and 8 GB RAM). The predetermined pixels along the X and Y axes of the MAP image were set to 200 × 200 pixels over an imaging range of 1.2 × 0.8 mm2, which determines the frame rate of 0.1 Hz.
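The ESF-to-LSF resolution estimate can be sketched numerically as below. This is an illustrative Python stand-in for the authors' processing chain; the error-function edge and its 5-µm width are hypothetical test data, not the measured blade profile.

```python
import numpy as np
from math import erf, sqrt

def fwhm_from_esf(esf, dx):
    """FWHM of the line spread function, obtained by differentiating
    the edge spread function; dx is the sample pitch in micrometres."""
    lsf = np.abs(np.gradient(esf))                 # LSF = d(ESF)/dx
    above = np.where(lsf >= lsf.max() / 2.0)[0]    # samples above half max
    return (above[-1] - above[0]) * dx             # width at half maximum

# Hypothetical edge: an error-function profile whose LSF is Gaussian
# (sigma = 5 um, so the expected FWHM is about 2.355 * 5 = 11.8 um).
x = np.arange(-50.0, 50.0, 0.1)
esf = np.array([0.5 * (1 + erf(v / (5.0 * sqrt(2)))) for v in x])
```

In practice the ESF would be a column of pixel values sampled across the blade edge in the MAP image.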

2.2 Nonlinear distortion simulation

In a MAP image, the pixels are uniformly arranged on the premise that the distribution of PA signals on the imaging surface is even. The main cause of nonlinear distortion is that the scan speed is uneven across the image. As the scan speed increases or decreases, distortion occurs in subregions where the sampling is relatively deficient or excessive. The distorted image exhibits a visual deviation of pixels, which is divided into barrel distortion and pincushion distortion according to the deviation direction [39,40]. To illustrate the relationship between the nonlinear distortion and the scan speed, the Computer Vision toolbox of MATLAB (R2020a for academic use) was used to simulate the phenomenon. A black-and-white array was selected as the input image, and different scan speeds V1, V2, V3 were designed to sample the image, as shown in Fig. 2 (V is the scan speed of the focused light spot in the B-scan, which consists of the scan speed v per pixel, V = [v1, v2, v3, …, vn]T). The sampling matrix Mi corresponding to V1, V2, V3 is processed as follows:

$${M_i} = {V_i} \cdot V_i^T,i = 1,2,3$$
Figure 2(a) shows the image under uniform scanning; there is no distortion in the output. The feature curve in the red dashed frame indicates that the black and white grids in the result are consistent, without deformation. Figures 2(b)-(c) show that the pixels converge toward the center in barrel distortion and diverge from the center in pincushion distortion.


Fig. 2. The resulting images corresponding to three different scan speeds. (a) Under uniform scanning, no distortion. (b) With low scan speed in the center and high scan speed at the edge, barrel distortion. (c) With high scan speed in the center and low scan speed at the edge, pincushion distortion.


The feature curves in the red dashed frames correspondingly illustrate that the black and white grids are relatively enlarged in subregions. The common feature of Figs. 2(b) and (c) is that the distortion grows as the scan speed increases or decreases away from the uniform value.
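The MATLAB simulation above can be mirrored with a short numpy sketch: a checkerboard is resampled at positions given by the cumulative scan-speed profile, so a speed that dips at the centre oversamples the centre and enlarges the central grids. The speed profiles here are illustrative choices, not the paper's V1-V3.

```python
import numpy as np

def scan_sample_rows(img, v):
    """Resample each row of img at positions given by the cumulative
    scan-speed profile v (one speed value per output pixel)."""
    pos = np.cumsum(v).astype(float)
    pos = (pos - pos[0]) / (pos[-1] - pos[0]) * (img.shape[1] - 1)
    return img[:, np.round(pos).astype(int)]

# Black-and-white checkerboard target (128 x 128)
board = np.kron(np.indices((8, 8)).sum(0) % 2, np.ones((16, 16)))

t = np.linspace(-1.0, 1.0, board.shape[1])
out_uniform = scan_sample_rows(board, np.ones_like(t))   # no distortion
out_slow_centre = scan_sample_rows(board, 0.2 + t**2)    # slow in the centre
```

With a constant profile the mapping is the identity and the grid is unchanged; the quadratic profile warps the grid as in Figs. 2(b)-(c).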

2.3 Nonlinear distortion correction method

In the nonlinear distortion correction, the deviation matrix D is utilized to correct the deviation of pixels between the distorted imaging matrix Din and the ideal imaging matrix Dideal. A grating resolution chart is used as a reference to estimate the deviation matrix D. The origin (m0 = 0, n0 = 0; m'0 = 0, n'0 = 0) is taken at the upper left corner. For the pixel at position (m, n) in matrix Din, there always exists a corresponding position (m', n') in matrix Dideal. Matrix D is orthogonally decomposed into matrices Dx, Dy along the X and Y axes, respectively, which is expressed as:

$$|{D(m,n)} |\textrm{ = }\sqrt {{{|{{D_x}(m,n)} |}^2}\textrm{ + }{{|{{D_y}(m,n)} |}^2}}$$
where Dx(m, n), Dy(m, n) are the components of deviation in X and Y axes, which are expressed as:
$$\left\{ \begin{array}{l} |{{D_x}(m,n)} |= T \cdot \sum\limits_{i = 1}^m {{v_x}(i,n)} - T \cdot m^{\prime} \cdot {{v^{\prime}}_x}\\ |{{D_y}(m,n)} |= T \cdot \sum\limits_{j = 1}^n {{v_y}(m,j)} - T \cdot n^{\prime} \cdot {{v^{\prime}}_y} \end{array} \right.,$$
where T is a constant time; the scan speeds vx(i, n) and vy(m, j) along the X and Y axes correspond to the positions (i, n) and (m, j) in the distorted image, respectively; and the scan speeds v'x and v'y are constant along the X and Y axes in the ideal image (1 ≤ i ≤ m, 1 ≤ j ≤ n; i, j are integers). The grating resolution chart divides the distorted image into segments along the X and Y axes. When a segment is short enough, the average scan speed can be regarded as the scan speed v(m, n), which is estimated as:
$$\hat{v}(m,n) \approx \frac{{{D_{seg}}}}{{{N_{seg}}T}},$$
where T is a constant time and Nseg is the number of pixels in the segment distance Dseg. The scan speeds along the X and Y axes are estimated from Eq. (4), and the deviation matrix D can be calculated from Eq. (2) and Eq. (3). The positions of the pixels can then be calibrated by the deviation matrix D, and the new pixel values are determined by the resampling method. According to the referenced resolution chart, ARM is designed as a negative feedback in a closed loop, which can adaptively distinguish between the uniform and non-uniform scanning regions and calibrate the distortion along the X and Y axes, respectively. Figure 3(a) shows the flow chart of ARM along the X axis. Figure 3(b) presents a sketch of the nonlinear distortion correction process for a distorted image. The process of ARM is divided into the following steps:


Fig. 3. (a) The flow chart of ARM in X axis. (b) The sketch of nonlinear distortion correction process.


Step one, the MAP of the distorted image is decomposed along the X and Y axes. The B-scans along the X axis are divided into segments, which are distinguished according to the intensity of the PA signals. The division precision depends on the line width and pitch of the grating and on the size of the focused light spot. It is assumed that the uniform and non-uniform scanning regions are accurately divided by an appropriate grating.

Step two, the segments are classified as uniform scanning segments or non-uniform scanning segments. The ideal number of pixels in a segment, N'seg, is used as the criterion for judging whether a segment belongs to uniform scanning, which is defined as follows:

$$N{^{\prime}_{seg}} = \left[ {\frac{N}{n}} \right]$$
where n is the number of segments in a B-scan, N is the number of pixels in a B-scan, and the operator [·] denotes rounding. If Nseg ∈ [N'seg − σ, N'seg + σ], the current segment is judged to belong to uniform scanning; otherwise, the current segment is judged to belong to non-uniform scanning.
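Given the per-segment pixel counts recovered from the grating image, step two reduces to a few lines; the tolerance σ = 2 pixels below is an assumed value, not one stated in the paper.

```python
def classify_segments(seg_pixel_counts, n_pixels_total, sigma=2):
    """Step two: compare each segment's pixel count with the ideal
    count N'_seg = round(N / n) from Eq. (5); counts outside the
    +/- sigma band mark non-uniform scanning."""
    ideal = round(n_pixels_total / len(seg_pixel_counts))
    return ["uniform" if ideal - sigma <= c <= ideal + sigma
            else "non-uniform" for c in seg_pixel_counts]
```

For example, with five segments in a 50-pixel B-scan (ideal count 10), `classify_segments([10, 10, 16, 9, 5], 50)` flags the 16- and 5-pixel segments as non-uniform.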

Step three, the segments in uniform scanning are used as the standard to calibrate the segments in non-uniform scanning in the same data set. The expansion coefficients Cx, Cy are introduced to estimate the degree of distortion of the segments in non-uniform scanning, which are expressed as:

$${C_x} = \frac{{{N_{seg}} \cdot \sum\limits_{\Delta m = 0}^{{N_{seg}}} {I(m + \Delta m,n)} }}{{{{N^{\prime}}_{seg}} \cdot \sum\limits_{\Delta m = 0}^{{{N^{\prime}}_{seg}}} {I^{\prime}(m^{\prime} + \Delta m,n^{\prime})} }},{C_y}(n) = \frac{{{N_{seg}} \cdot \sum\limits_{\Delta n = 0}^{{N_{seg}}} {I(m,n + \Delta n)} }}{{{{N^{\prime}}_{seg}} \cdot \sum\limits_{\Delta n = 0}^{{{N^{\prime}}_{seg}}} {I^{\prime}(m^{\prime},n^{\prime} + \Delta n)} }}.$$
where I(m+Δm, n), I(m, n+Δn), I'(m'+Δm, n') and I'(m', n'+Δn) are the pixel values corresponding to pixels P(m+Δm, n), P(m, n+Δn), P'(m'+Δm, n'), and P'(m', n'+Δn) in non-uniform scanning and uniform scanning, respectively. Cubic spline interpolation is chosen as the upsampling method and weighted averaging as the downsampling method to resample the segments in the non-uniform scanning region. The cubic spline function F(m) in the interval mi < m < mi+1 (i an integer) is expressed as follows:
$$\begin{array}{c} F(m) = {a_i} + {b_i}(m - {m_i}) + {c_i}{(m - {m_i})^2} + {d_i}{(m - {m_i})^3}, \\ {a_i} = I({m_i}), {b_i} = \frac{{I({m_{i + 1}}) - I({m_i})}}{{{m_{i + 1}} - {m_i}}} - \frac{{{m_{i + 1}} - {m_i}}}{2}\ddot{I}({m_i}) - \frac{{{m_{i + 1}} - {m_i}}}{6}({\ddot{I}({m_{i + 1}}) - \ddot{I}({m_i})} ),{c_i} = \frac{{\ddot{I}({m_i})}}{2},{d_i} = \frac{{\ddot{I}({m_{i + 1}}) - \ddot{I}({m_i})}}{{6({m_{i + 1}} - {m_i})}} \end{array}$$
where $\ddot{I}(m)$ denotes the second derivative of the pixel value. After interpolation, the new pixel value I(m, n) in the non-uniform scanning region is obtained by weighted averaging, which is defined as follows:
$$\left\{ \begin{array}{l} I(m,n) = \left[ {\sum\limits_{j = 0}^{{{[C ]} \mathord{\left/ {\vphantom {{[C ]} 2}} \right.} 2}} {\frac{{{{[C ]} \mathord{\left/ {\vphantom {{[C ]} 2}} \right.} 2} - j}}{{[C ]}}I(m + j,n) + \sum\limits_{j = 0}^{{{[C ]} \mathord{\left/ {\vphantom {{[C ]} 2}} \right.} 2}} {\frac{{{{[C ]} \mathord{\left/ {\vphantom {{[C ]} 2}} \right.} 2} - j}}{{[C ]}}I(m - j,n)} } } \right]\frac{{[C ]}}{{\sum\limits_{j = 0}^{{{[C ]} \mathord{\left/ {\vphantom {{[C ]} 2}} \right.} 2}} {2j} }},[C ]\textrm{ is even}\\ I(m,n) = \left[ {\sum\limits_{j = 0}^{{{[{C + 1} ]} \mathord{\left/ {\vphantom {{[{C + 1} ]} 2}} \right.} 2}} {\frac{{{{[{C + 1} ]} \mathord{\left/ {\vphantom {{[{C + 1} ]} 2}} \right.} 2} - j}}{{[C ]}}I(m + j,n) + \sum\limits_{j = 0}^{{{[{C - 1} ]} \mathord{\left/ {\vphantom {{[{C - 1} ]} 2}} \right.} 2}} {\frac{{{{[{C - 1} ]} \mathord{\left/ {\vphantom {{[{C - 1} ]} 2}} \right.} 2} - j}}{{[C ]}}I(m - j,n)} } } \right]\frac{{[C ]}}{{\sum\limits_{j = 0}^{{{[C ]} \mathord{\left/ {\vphantom {{[C ]} 2}} \right.} 2}} {2j} + {{[{C + 1} ]} \mathord{\left/ {\vphantom {{[{C + 1} ]} 2}} \right.} 2}}},[C ]\textrm{ is odd} \end{array} \right.$$
where [C] is the expansion coefficient C rounded, and I(m + j, n) and I(m − j, n) are the pixel values at positions (m + j, n) and (m − j, n). Furthermore, the expansion coefficient C is the criterion that determines whether segments in the non-uniform scanning region have been effectively corrected. If C ∈ [1 − σ, 1 + σ], the segment is judged to be corrected; otherwise, the segment is recalibrated. The corrected non-uniform scanning region is combined with the uniform scanning region after all segments have been calibrated. The distortion along the X axis is thereby corrected, and the deviation matrix Dx is obtained.
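A minimal sketch of the per-segment resampling in step three is given below. Upsampling uses SciPy's cubic spline, matching Eq. (7); for downsampling, plain block averaging stands in for the paper's weighted average of Eq. (8), so this is an approximation rather than the exact kernel.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def resample_segment(values, n_target):
    """Stretch or shrink one segment to the ideal length N'_seg."""
    n = len(values)
    if n_target >= n:   # upsample: cubic-spline interpolation, as in Eq. (7)
        return CubicSpline(np.arange(n), values)(np.linspace(0, n - 1, n_target))
    # downsample: average neighbouring samples onto the coarser grid
    # (simplified stand-in for the weighted average of Eq. (8))
    edges = np.linspace(0, n, n_target + 1)
    return np.array([values[int(a):max(int(a) + 1, int(b))].mean()
                     for a, b in zip(edges[:-1], edges[1:])])
```

Each non-uniform segment is resampled to N'seg samples and then spliced back between its uniform neighbours.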

Step four, repeat steps one to three to correct the distortion along the Y axis and calculate the deviation matrix Dy. A uniform image is obtained after the corrections along the X and Y axes. The nonlinear distortion has thus been removed from the original image, and the deviation matrix D is obtained, which can be used to correct any distorted image from the electrothermal MEMS scanner.
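Under the same notation, the speed estimate of Eq. (4) and the per-pixel deviation of Eq. (3) along one axis can be sketched as follows; the segment distance and constant time T in the examples are placeholder values.

```python
import numpy as np

def average_speed(seg_dist, n_seg_pixels, T):
    """Eq. (4): v ~= D_seg / (N_seg * T)."""
    return seg_dist / (n_seg_pixels * T)

def deviation_row(v_per_pixel, v_ideal, T):
    """Eq. (3), one row: |D_x(m)| = T * sum_{i<=m} v(i) - T * m * v'."""
    m = np.arange(1, len(v_per_pixel) + 1)
    return T * np.cumsum(v_per_pixel) - T * m * v_ideal
```

A row scanned at the ideal constant speed yields a zero deviation row, so only the non-uniform segments contribute to D.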

It should be noted that the method described here can be used to characterize the nonlinear distortion caused by any electrothermal MEMS scanner; the parameter settings might differ in each case. The utility of this method is that it is completely independent of a given system and provides correction without post-processing.

3. Experiments and results

3.1 Analysis and validation in phantom study

The phantom experiments involved two parts: the first part illustrates and analyzes the staged results of the correction process; the second part proves that all directions of the distorted image were corrected.

To illustrate the correction process of ARM in one direction, a grating resolution chart (line width of 10 µm, pitch of 20 µm) was utilized as a "ruler" to accurately estimate the distortion of segments along the Y axis. The nonlinear distortion correction of a B-scan is shown in Fig. 4. The original B-mode image of the grating, which was divided into segments with a 30-µm scale, is shown in the upper part of Fig. 4(a), and the corresponding MAP image, which illustrates the distribution of PA signals in the segments, is shown in the lower part of Fig. 4(a). Figure 4(b) shows the MAP of the B-scan after ARM. To better exhibit the performance of ARM, the areas marked by purple frames in Figs. 4(a) and (b) were enlarged and shown in Figs. 4(c) and (d), respectively. The segments in the non-uniform scanning region are highlighted in the red dashed frames in Figs. 4(c) and (d). To illustrate the correction effect on the scan speed of the focused light spot, a comparison of the average scan speeds of the segments in Figs. 4(c) and (d) is shown in Fig. 4(e). The statistics reveal that the normalized average scan speeds of segments in the non-uniform scanning region tend to be consistent after correction. Segments S6, S7, and S11 were selected as representatives of non-uniform and uniform scanning, respectively, and the normalized speed ratio was calculated by comparison with the ideal average scan speed, as shown in Fig. 4(f). The corrected average scan speeds in the non-uniform scanning region approached those of the uniform scanning region. Table 1 shows the expansion coefficient Cy of the segments in Figs. 4(c) and (d). The statistics reveal that the degree of distortion of segments in the non-uniform scanning region approaches zero after correction.


Fig. 4. The validation of the proposed correction method for one axial distortion. (a) The MAP of the original B-mode image, upside inset: the original B-mode image. (b) The corrected MAP after ARM. (c)-(d) The enlarged regions of the purple frames in (a) and (b). (e) The statistics of normalized average scan speed in segments from (c)-(d). (f) The expansion coefficient Cy of the chosen segments S6, S7, S11 in (c)-(d).


Table 1. The expansion coefficient Cy of segments before and after correction.

The scan speed curves of selected B-scans along the X axis are shown in Fig. 5(a). The high similarity among the scan speed curves demonstrates that the scanning along the X and Y axes is non-interfering. Furthermore, to illustrate that the average scan speeds in the same segment of different B-scans are coincident, a statistical analysis of the average scan speeds in segments S4, S16, S24, S33, and S40 was established, as shown in Fig. 5(b). The results are from ten randomly selected B-scans. The scanning feature curve of the electrothermal MEMS scanner along the Y axis was obtained by fitting the scan speed curves, as shown in Fig. 5(c). The fitted-curve residual is within 0.04, the square of the correlation coefficient (R-square) is 0.421, the sum of squares of the regression (SSR) is 0.479, and the root mean square error (RMSE) is 0.107. As the distribution of average scan speeds in a B-scan is close to a Gaussian distribution, a Gauss function is a suitable fitting function, and these parameters characterize the agreement between the Gaussian fitting curve and the raw data. In addition, the feature curve shows relatively low speed in the center and at both edges, which fits the practical movement of the MEMS. The scanning feature along the X axis can be obtained in the same way.
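A Gaussian fit of a scan-speed curve of this kind can be reproduced with `scipy.optimize.curve_fit`. The synthetic speed samples below (slow at the centre) are hypothetical stand-ins, not the measured MEMS data.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, a, mu, sig, c):
    """Gaussian with amplitude a, centre mu, width sig, offset c."""
    return a * np.exp(-((x - mu) ** 2) / (2 * sig ** 2)) + c

x = np.linspace(0.0, 1.0, 41)          # normalised position along one B-scan
# Synthetic normalised speeds: a central dip plus a small ripple
v = gauss(x, -0.6, 0.5, 0.2, 1.0) + 0.01 * np.sin(20 * x)

popt, _ = curve_fit(gauss, x, v, p0=[-0.5, 0.5, 0.2, 1.0])
rmse = np.sqrt(np.mean((v - gauss(x, *popt)) ** 2))   # goodness of fit
```

The residual and RMSE computed this way play the role of the fit-quality metrics quoted for Fig. 5(c).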


Fig. 5. The validation of the independence between the X and Y axes. (a) The scanning field along the fast axis consists of scan speed curves corresponding to different voltages of the slow axis. (b) The comparison of the average scan speed (ASS) in segments S4, S16, S24, S33, S40. (c) Upside: the Gauss fitting of the scan speed curves; downside: residual error.


To illustrate the correction along the X and Y axes, a grid resolution chart (line width of 30 µm, pitch of 60 µm) was used as the target. The correction is based on the fact that the two scanning axes X and Y are relatively independent. Figure 6(a) shows the original MAP image. The image of the distorted resolution chart was corrected gradually, as shown in Figs. 6(a)-(c). To illustrate the correction of the sub-regions, the distorted degrees of the sub-grids in the grey dashed frames of Figs. 6(a)-(c) were obtained and illustrated by heat-map analysis, as shown in Figs. 6(d)-(f). The heat-maps provide an intuitive reflection of the sub-grid variation, and the number corresponds to each sub-grid in pixels. The results in Figs. 6(d)-(f) illustrate that the center of the original image suffered the maximum distortion, and the center sub-grid is indicated in red in Fig. 6(d). After correction along the X and Y axes, all sub-grids tend to be uniform and appear in the same colour, as shown in Fig. 6(f). Figures 6(g)-(i) show the expansion coefficient (C(X, Y) = Cx·Cy) statistics of the sequentially numbered sub-grids in Figs. 6(a)-(c), respectively (counted from top to bottom, left to right). The expansion coefficient C(X, Y) illustrates the accumulated correction effect in the two perpendicular dimensions. The statistics in Fig. 6(i) illustrate that the sub-grids of the non-uniform scanning region match the uniform scanning region in both the sampling number and the pixel value. Meanwhile, an image comparison method based on structural similarity was utilized to quantify the effect of the correction [41,42]. The comparison process involved binarization of the three images first; the similarity was then calculated with a regularization parameter array equal to [0.01, 0.03, 0.015].
It was verified that the similarity between the corrected image and the optical image is up to 96.82% in a local subset, and the similarity with the galvanometer scanning image is 95.10% in a local subset, while the global structural similarity was improved 5.82 times on average. The galvanometer scanning system used the same scanning mode and had a similar lateral resolution.
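For reference, a whole-image version of the structural similarity index can be written in a few lines of numpy. The paper's comparison uses local windows, so this single-window form is a simplification, and the regularisation constants below are the conventional SSIM defaults rather than the paper's parameter array.

```python
import numpy as np

def global_ssim(a, b, c1=0.01 ** 2, c2=0.03 ** 2):
    """Single-window SSIM for two images scaled to [0, 1]."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    mu_a, mu_b = a.mean(), b.mean()
    cov = ((a - mu_a) * (b - mu_b)).mean()         # cross-covariance
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (a.var() + b.var() + c2))
```

Identical images score 1, while a contrast-inverted copy scores near −1; the paper's local-subset figures come from restricting this comparison to windows around matching structures.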


Fig. 6. The validation of image correction processing. (a) The original MAP image. (b)-(c) Images after the correction in X, Y axes, respectively. The dashed frame highlights the complete grids parts. (d)-(f) The heat-map analysis of sub-grids in dashed frames from (a)-(c), respectively. (g)-(i) The expansion coefficient C (X, Y) of each sequentially numbered sub-grid corresponding to (a)-(c), respectively. The color bars indicate the PA amplitude of (a)-(c) and the number of pixels of (d)-(f). Scale bar: 100 µm.


To illustrate the reliability of the correction for all directions, a concentric circular resolution chart (line width of 10 µm, pitch of 20 µm, center circle of ∼20 µm) was utilized. Figures 7(a)-(b) show the images before and after correction, respectively. L1, L2, L3, and L4 in Fig. 7(a) correspond to L'1, L'2, L'3, and L'4 in Fig. 7(b) at the same angles. Each line was taken from the origin to the edge of the image, and the corresponding length differed for different angles. The profiles along the specified directions L1-L4 and L'1-L'4 were extracted in pixels, as shown in Figs. 7(c)-(d). The image profiles show high pixel values on the lines and low pixel values in the pitch, which was used to illustrate the pitch between the concentric circles of the image. The results illustrate that the omnidirectional pitches were corrected uniformly and that the obvious distortion near the center of the circle and the edge of the image was removed. It should be noted that the results in Figs. 7(c)-(d) contain angular errors due to measurement, as well as pixel value errors affected by the brightness and darkness of the images.


Fig. 7. The verification of correction in all directions. (a) The original MAP image. (b) The corrected image of (a). (c) The evenness degree at the angles of L1, L2, L3 and L4 in (a). (d) The evenness degree at the angles of L'1, L'2, L'3 and L'4 in (b). L1, L2, L3 and L4 are lines at different angles starting from the center of the circle to the edge of the image, corresponding to the same angles as L'1, L'2, L'3 and L'4. The color bar indicates the PA amplitude of (a) and (b). Scale bar: 100 µm.


3.2 Application in vivo study

To show the practicability of the nonlinear distortion correction in laparoscopic imaging, a rat (female Wistar rat, 250∼300 g) was used for in vivo experiments. The animal experiments were conducted under the approval of the South China Normal University. Before the experiments, the rat was injected intraperitoneally with pentobarbital sodium (0.3 g/ml) and placed on a thermostatic stage. Once the rat was properly anesthetized, the abdominal cavity of the rat was gently opened with a surgical blade to expose the internal organs. To illustrate the advantage of combining PA imaging with optical imaging, the PAL probe was used for intestinal surface imaging in Fig. 8. The photos from the CCD camera were used as a contrast for the PA images, as shown in Figs. 8(a) and (f). It should be noted that there is a certain difference in tilt between the imaging surfaces of the dashed frames in Fig. 8(a), as the intestine is flexible and sinuous. To illustrate the correction effect in vivo, the original MAP images corresponding to the dashed frames in Fig. 8(a) are shown in Figs. 8(b) and (d), and the corrected images are shown in Figs. 8(c) and (e), respectively. The results in Figs. 8(b)-(e) illustrate that the shape, curvature, and thickness of the vessels indicated by the white arrows change obviously after correction. A large-field motor-scanning image, shown in Fig. 8(g), was employed as a reference for the flexible and miniaturized PAL. To illustrate that the nonlinear distortion correction has no directivity, sub-images in different scanning directions were selected and spliced together, which provided a direct comparison with Fig. 8(g), as shown in Fig. 8(h). The result illustrates that the combination of the three sub-images matches the black-and-white background of Fig. 8(g) well.
It should be noted that there is a certain difference between the image scales of the motor and the electrothermal MEMS, because the motor scanning probe has a certain distance from the intestine and the imaging surface of the curved intestine cannot be completely flattened. However, the relative positions of the vessels are unchanged, and the sub-images were properly scaled to match the corresponding positions in the background. The optical image from the CCD camera exhibits distortion caused by the overlapping projection of a three-dimensional object onto a two-dimensional plane, which affects the observation of the microvascular network on the bent and folded intestine. Because the PAL probe is completely attached to the intestine for PA imaging, the flattened-plane information with high optical contrast complements the optical image, supplying the distribution and density of the vessels on a curved surface.


Fig. 8. The validation of vessel morphology correction in in vivo imaging of the rat intestinal surface. (a) and (f) The optical images from the CCD camera. (b) and (d) The MAP images of the raw data corresponding to the dashed frames in (a). (c) and (e) The MAP images after correction corresponding to (b) and (d), respectively. White triangle arrows point to the corrected vessels. (g) The MAP image under motor scanning of the region in (f). (h) Three corrected sub-images in the background of the black-and-white image (g). The color bar indicates the PA amplitude of (b)-(e), (g) and (h). Scale bar, 200 µm.


4. Discussion and conclusion

The fast forward-scanning electrothermal-MEMS-based PAL established in this work captures signals at high speed but introduces distortion caused by insufficient heat release. As shown above, the adaptive resampling method (ARM) was designed to compensate for the nonlinear response of the electrothermal MEMS and correct the resulting image distortion. In the correction process, the distortion is decomposed into two perpendicular directions; for each direction, the distorted region is split into B-scans, and each B-scan is divided into segments to separate the uniform from the non-uniform scanning segments. The deviation matrix D is then obtained by decomposing the distortion down to these minimum units for calibration, which allows any distorted image from the electrothermal MEMS to be corrected.
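The one-axis pipeline described above can be sketched in a few lines. This is a minimal illustration under our assumptions: the function names, the sinusoidal speed profile and the use of linear interpolation between sample positions are ours, not the authors'; ARM itself operates segment-wise.

```python
import numpy as np

def deviation_x(v_x, T=1.0):
    """Deviation of the accumulated nonlinear scan position from the
    ideal uniform position: T*sum_{i<=m} v_x(i) - T*m*mean(v_x)."""
    m = np.arange(1, v_x.size + 1)
    return T * np.cumsum(v_x) - T * m * v_x.mean()

def resample_bscan(bscan, v_x):
    """Map a non-uniformly sampled B-scan onto a uniform grid.

    The actual (nonlinear) pixel positions are the cumulative scan
    speed; linear interpolation stands in for the paper's segment-wise
    expansion/compression."""
    pos = np.cumsum(v_x)                        # nonlinear positions
    pos = (pos - pos[0]) / (pos[-1] - pos[0])   # normalize to [0, 1]
    uniform = np.linspace(0.0, 1.0, bscan.size)
    return np.interp(uniform, pos, bscan)

# Example: a speed profile that is slow at the edges and fast in the
# center (the barrel-distortion case) applied to a synthetic A-line row.
v = 0.5 + np.sin(np.linspace(0.1, np.pi - 0.1, 256))
line = np.cos(np.linspace(0, 4 * np.pi, 256))
corrected = resample_bscan(line, v)
```

For a uniform speed profile the deviation vanishes and the resampling is an identity, which is the consistency check the correction relies on.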

Phantom experiments showed that the distortion was corrected in all directions. Using an image structural comparison method, the structural similarity between the corrected image and the optical image reached 96.82% in a local subset, and 95.10% with the galvanometer-scanning image. However, the galvanometer is too large for PAL, whereas the corrected electrothermal MEMS is well suited to it. The corrected image was displayed with a delay of 1.2 to 2 seconds. The correction was also shown to be effective in vivo, yielding precise images of vessels on the curved intestinal surface of the rat. The vessel morphology obtained from the corrected images has the potential to support precise surgery and treatment. Because the correction was synchronized with signal acquisition and reconstruction for real-time display, it provides theoretical and practical support for the application of electrothermal-MEMS-based PAL. Furthermore, the distribution and density of the microvascular network could be obtained in real-time imaging by increasing the laser repetition rate and expanding the scanning angles. The developed PAL, with its high-speed signal acquisition, is promising for assisting in the diagnosis of minimal lesions and monitoring real-time blood-flow changes during surgery.
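The structural-similarity score used above follows Wang et al. [41]. A minimal global-statistics version can be written as follows; the windowless formulation is our simplification (the paper's "local subset" scores would use a sliding window, e.g. skimage.metrics.structural_similarity), and the constants are the common defaults for images scaled to [0, 1].

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Global SSIM over whole images: luminance, contrast and structure
    terms computed from image-wide means, variances and covariance."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

# Synthetic check: an image compared with itself scores 1.0; a noisy
# copy (standing in for a residually distorted image) scores lower.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
noisy = np.clip(ref + 0.2 * rng.standard_normal(ref.shape), 0.0, 1.0)
```
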

The advantages of our method are its universal applicability and low time complexity: it runs in real time on the front end of the imaging system and adaptively corrects the nonlinear distortion of images. More importantly, it offers a general solution to the nonlinear motion of scanning devices that effectively removes the resulting image distortion.

Funding

National Natural Science Foundation of China (11774101, 61627827, 61822505, 81630046); Science and Technology Planning Project of Guangdong Province (2015B020233016); China Postdoctoral Science Foundation (2019M652943); Natural Science Foundation of Guangdong Province (2019A1515011399); Guangzhou Science and Technology Program key projects (201607010371, 2019050001).

Disclosures

The authors declare no conflicts of interest.

References

1. L. V. Wang and S. Hu, “Photoacoustic tomography: in vivo imaging from organelles to organs,” Science 335(6075), 1458–1462 (2012). [CrossRef]  

2. M. Omar, J. Aguirre, and V. Ntziachristos, “Optoacoustic mesoscopy for biomedicine,” Nat. Biomed. Eng. 3(5), 354–370 (2019). [CrossRef]  

3. M. M. Chen, H. J. Knox, Y. Q. Tang, W. Liu, L. M. Nie, J. Chan, and J. J. Yao, “Simultaneous photoacoustic imaging of intravascular and tissue oxygenation,” Opt. Lett. 44(15), 3773–3776 (2019). [CrossRef]  

4. Q. Zhao, R. Q. Lin, C. B. Liu, J. X. Zhao, G. T. Si, L. Song, and J. Meng, “Quantitative analysis on in vivo tumor-microvascular images from optical-resolution photoacoustic microscopy,” J. Biophotonics 12(6), e201800421 (2019). [CrossRef]  

5. Z. Y. Yang, J. H. Chen, J. J. Yao, R. Q. Lin, J. Meng, C. B. Liu, J. H. Yang, X. Li, L. H. Wang, and L. Song, “Multi-parametric quantitative microvascular imaging with optical-resolution photoacoustic microscopy in vivo,” Opt. Express 22(2), 1500–1511 (2014). [CrossRef]  

6. N. Wu, S. Q. Ye, Q. S. Ren, and C. H. Li, “High-resolution dual-modality photoacoustic ocular imaging,” Opt. Lett. 39(8), 2451–2454 (2014). [CrossRef]  

7. S. S. Tang, J. Chen, P. Samant, K. Stratton, and L. Z. Xiang, “Transurethral photoacoustic endoscopy for prostate cancer: a simulation study,” IEEE Trans. Med. Imaging 35(7), 1780–1787 (2016). [CrossRef]  

8. H. L. He, A. Stylogiannis, P. Afshari, T. Wiedemann, K. Steiger, A. Buehler, C. Zakian, and V. Ntziachristos, “Capsule optoacoustic endoscopy for esophageal imaging,” J. Biophotonics 12(10), e201800439 (2019). [CrossRef]  

9. Y. Li, G. X. Lu, J. J. Chen, J. C. Jing, T. C. Huo, R. M. Chen, L. M. Jiang, Q. F. Zhou, and Z. P. Chen, “PMN-PT/Epoxy 1-3 composite based ultrasonic transducer for dual-modality photoacoustic and ultrasound endoscopy,” Photoacoustics 15, 100138 (2019). [CrossRef]  

10. K. D. Xiong, S. H. Yang, X. W. Li, and D. Xing, “Autofocusing optical-resolution photoacoustic endoscopy,” Opt. Lett. 43(8), 1846–1849 (2018). [CrossRef]  

11. Y. Cao, J. Hui, A. Kole, P. Wang, Q. Yu, W. Chen, M. Sturek, and J. X. Cheng, “High-sensitivity intravascular photoacoustic imaging of lipid-laden plaque with a collinear catheter design,” Sci. Rep. 6(1), 25236 (2016). [CrossRef]  

12. H. Lei, L. A. Johnson, K. A. Eaton, S. C. Liu, J. Ni, X. D. Wang, P. D. R. Higgins, and G. Xu, “Characterizing intestinal strictures of Crohn’s disease in vivo by endoscopic photoacoustic imaging,” Biomed. Opt. Express 10(5), 2542–2555 (2019). [CrossRef]  

13. N. Liu, S. H. Yang, and D. Xing, “Photoacoustic and hyperspectral dual-modality endoscope,” Opt. Lett. 43(1), 138–141 (2018). [CrossRef]  

14. W. C. Huang, R. H. Chen, Y. Peng, F. Duan, Y. F. Huang, W. S. Guo, X. Y. Chen, and L. M. Nie, “In vivo quantitative photoacoustic diagnosis of gastric and intestinal dysfunctions with a broad pH-responsive sensor,” ACS Nano 13(8), 9561–9570 (2019). [CrossRef]  

15. B. Wang, J. L. Su, J. Amirian, S. H. Litovsky, R. Smalling, and S. Emelianov, “Detection of lipid in atherosclerotic vessels using ultrasound-guided spectroscopic intravascular photoacoustic imaging,” Opt. Express 18(5), 4889–4897 (2010). [CrossRef]  

16. K. Galaal, H. Donkers, A. Bryant, and A. D. Lopes, “Laparoscopy versus laparotomy for the management of early stage endometrial cancer,” Cochrane Database Syst. Rev. 10(9), CD006655 (2018). [CrossRef]  

17. D. Fuks, F. Cauchy, S. Ftériche, T. Nomi, L. Schwarz, S. Dokmak, O. Scatton, G. Fusco, J. Belghiti, B. Gayet, and O. Soubrane, “Laparoscopy Decreases Pulmonary Complications in Patients Undergoing Major Liver Resection,” Ann. Surg. 263(2), 353–361 (2016). [CrossRef]  

18. N. Bird, M. Elmasry, R. Jones, M. Elniel, M. Kelly, D. Palmer, S. Fenwick, G. Poston, and H. M. Bird, “Role of staging laparoscopy in the stratification of patients with perihilar cholangiocarcinoma,” Br. J. Surg. 104(4), 418–425 (2017). [CrossRef]  

19. X. J. Dai, H. Yang, T. Q. Shan, H. K. Xie, S. A. Berceli, and H. B. Jiang, “Miniature endoscope for multimodal imaging,” ACS Photonics 4(1), 174–180 (2017). [CrossRef]  

20. J. Rebling, F. J. O. Landa, X. L. Deán-Ben, A. Douplik, and D. Razansky, “Integrated catheter for simultaneous radio frequency ablation and optoacoustic monitoring of lesion progression,” Opt. Lett. 43(8), 1886–1889 (2018). [CrossRef]  

21. L. Xi, X. Q. Li, L. Yao, S. Grobmyer, and H. B. Jiang, “Design and evaluation of a hybrid photoacoustic tomography and diffuse optical tomography system for breast cancer detection,” Med. Phys. 39(5), 2584–2594 (2012). [CrossRef]  

22. Y. Li, Z. K. Zhu, J. C. Jing, J. J. Chen, A. E. Heidari, Y. M. He, J. Zhu, T. Ma, M. Y. Yu, Q. F. Zhou, and Z. P. Chen, “High-speed integrated endoscopic photoacoustic and ultrasound imaging system,” IEEE J. Sel. Top. Quantum Electron. 25(1), 1–5 (2019). [CrossRef]  

23. M. Yang, L. Y. Zhao, X. J. He, N. Su, C. Y. Zhao, H. W. Tang, T. Hong, W. B. Li, F. Yang, L. Lin, B. Zhang, R. Zhang, Y. X. Jiang, and C. H. Li, “Photoacoustic/ultrasound dual imaging of human thyroid cancers: an initial clinical study,” Biomed. Opt. Express 8(7), 3449–3457 (2017). [CrossRef]  

24. R. Ansari, E. Z. Zhang, A. E. Desjardins, and P. C. Beard, “All-optical forward-viewing photoacoustic probe for high-resolution 3D endoscopy,” Light: Sci. Appl. 7(1), 75 (2018). [CrossRef]  

25. G. Y. Li, Z. D. Guo, and S. L. Chen, “Miniature probe for forward-view wide-field optical-resolution photoacoustic endoscopy,” IEEE Sens. J. 19(3), 909–916 (2019). [CrossRef]  

26. R. Ansari, E. Z. Zhang, A. E. Desjardins, A. L. David, and P. C. Beard, “Use of a flexible optical fibre bundle to interrogate a Fabry–Perot sensor for photoacoustic imaging,” Opt. Express 27(26), 37886–37899 (2019). [CrossRef]  

27. H. Yuan, X. Li, and L. Que, “A transparent nanostructured optical biosensor,” J. Biomed. Nanotechnol. 10(5), 767–774 (2014). [CrossRef]  

28. Q. Chen, H. Guo, T. Jin, W. Z. Qi, H. K. Xie, and L. Xi, “Ultracompact high-resolution photoacoustic microscopy,” Opt. Lett. 43(7), 1615–1618 (2018). [CrossRef]  

29. M. Moothanchery, R. Z. Bi, J. Y. Kim, S. Jeon, C. Kim, and M. Olivo, “Optical resolution photoacoustic microscopy based on multimode fibers,” Biomed. Opt. Express 9(3), 1190–1197 (2018). [CrossRef]  

30. J. J. Yao, L. D. Wang, J. Yang, L. S. Gao, K. I. Maslov, L. V. Wang, C. H. Huang, and J. Zou, “Wide-field fast-scanning photoacoustic microscopy based on a water-immersible MEMS scanning mirror,” J. Biomed. Opt. 17(8), 080505 (2012). [CrossRef]  

31. S. L. Chen, Z. X. Xie, T. Ling, L. J. Guo, X. B. Wei, and X. D. Wang, “Miniaturized all-optical photoacoustic microscopy based on microelectromechanical systems mirror scanning,” Opt. Lett. 37(20), 4263–4265 (2012). [CrossRef]  

32. Q. Chen, H. K. Xie, and L. Xi, “Wearable optical resolution photoacoustic microscopy,” J. Biophotonics 12(8), e201900066 (2019). [CrossRef]  

33. J. W. Baik, J. Y. Kim, S. Cho, S. Choi, J. Kim, and C. Kim, “Super wide-field photoacoustic microscopy of animals and humans in vivo,” IEEE Trans. Med. Imaging 39(4), 975–984 (2020). [CrossRef]  

34. Q. F. Zhou and Y. Zhang, “Editorial for the special issue on MEMS technology for biomedical imaging Applications,” Micromachines 10(9), 615 (2019). [CrossRef]  

35. Z. Qiu and W. Piyawattanametha, “MEMS actuators for optical microendoscopy,” Micromachines 10(2), 85 (2019). [CrossRef]  

36. Y. Liu, Y. J. Feng, X. L. Sun, L. J. Zhu, X. Cheng, Q. Chen, Y. B. Liu, and H. K. Xie, “Integrated tilt angle sensing for large displacement scanning MEMS mirrors,” Opt. Express 26(20), 25736–25749 (2018). [CrossRef]  

37. F. T. Han, W. Wang, X. Y. Zhang, and H. K. Xie, “Modeling and control of a large-stroke electrothermal MEMS mirror for fourier transform microspectrometers,” J. Microelectromech. Syst. 25(4), 750–760 (2016). [CrossRef]  

38. Q. Chen, H. Guo, W. Z. Qi, Q. Gan, L. Yang, B. W. Ke, X. X. Chen, T. Jin, and L. Xi, “Assessing hemorrhagic shock: Feasibility of using an ultracompact photoacoustic microscope,” J. Biophotonics 12(4), e201800348 (2019). [CrossRef]  

39. R. Hartley and S. B. Kang, “Parameter-free radial distortion correction with center of distortion estimation,” IEEE Trans. Pattern Anal. Mach. Intell. 29(8), 1309–1321 (2007). [CrossRef]  

40. Y. Hou, H. Y. Zhang, J. Y. Zhao, J. He, H. Qi, Z. W. Liu, and B. Q. Guo, “Camera lens distortion evaluation and correction technique based on a colour CCD moiré method,” Opt. Lasers Eng. 110, 211–219 (2018). [CrossRef]  

41. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13(4), 600–612 (2004). [CrossRef]  

42. T. Samajdar and M. I. Quraishi, “Analysis and evaluation of image quality metrics,” Information Systems Design and Intelligent Applications 340, 369–378 (2015). [CrossRef]  



Figures (8)

Fig. 1.
Fig. 1. An overview of the PAL system. (a) Schematic of the PAL system. L1-L2, convex lens; PH, pinhole; BS, beam splitter; PD, photodiode; SMF, single mode fiber; AMP, amplifier; DAQ, data acquisition card; VAQ, video acquisition card; FPGA, field programmable gate array. (b) Photo of the assembled imaging probe. T, tube; CL, collimation lens; P, prism; MEMS, micro-electro-mechanical system; CC, CCD camera; UT, ultrasound transducer; IW, imaging window. (c) The schematic of the imaging probe marked by the dashed frame in (a) and the scanning trace in IW. L, plano-convex lens; CG, cover glass. (d) The lateral resolution. (e) The axial resolution.
Fig. 2.
Fig. 2. The resulting images corresponding to three different scan speeds. (a) Under uniform scanning, no distortion. (b) With low scan speed in the center and high scan speed at the edge, barrel distortion. (c) With high scan speed in the center and low scan speed at the edge, pincushion distortion.
Fig. 3.
Fig. 3. (a) The flow chart of ARM in X axis. (b) The sketch of nonlinear distortion correction process.
Fig. 4.
Fig. 4. The validation of the proposed correction method for one axial distortion. (a) The MAP of the original B-mode image, upside inset: the original B-mode image. (b) The corrected MAP after ARM. (c)-(d) The enlarged regions of the purple frames in (a) and (b). (e) The statistics of normalized average scan speed in segments from (c)-(d). (f) The expansion coefficient Cy of the chosen segments S6, S7, S11 in (c)-(d).
Fig. 5.
Fig. 5. The validation of the independence between X and Y axes. (a) The scanning field in the fast axis consists of scan speed curves corresponding to different voltages of the slow axis. (b) The comparison of average scan speed (ASS) in segments S4, S16, S24, S33, S40. (c) Upside: the Gauss fitting of scan speed curves; downside: residual error.
Fig. 6.
Fig. 6. The validation of image correction processing. (a) The original MAP image. (b)-(c) Images after the correction in X, Y axes, respectively. The dashed frame highlights the complete grids parts. (d)-(f) The heat-map analysis of sub-grids in dashed frames from (a)-(c), respectively. (g)-(i) The expansion coefficient C (X, Y) of each sequentially numbered sub-grid corresponding to (a)-(c), respectively. The color bars indicate the PA amplitude of (a)-(c) and the number of pixels of (d)-(f). Scale bar: 100 µm.
Fig. 7.
Fig. 7. The verification of correction in all directions. (a) The original MAP image. (b) The corrected image of (a). (c) The evenness degree along the angles of L1, L2, L3 and L4 in (a). (d) The evenness degree along the angles of L’1, L’2, L’3 and L’4 in (b). L1, L2, L3 and L4 are lines at different angles starting from the center of the circle to the edge of the image, corresponding to the same angles as L’1, L’2, L’3 and L’4. The color bar indicates the PA amplitude of (a) and (b). Scale bar: 100 µm.

Tables (1)


Table 1. The expansion coefficient Cy of segments before and after correction.

Equations (8)

(1) $M_i = V_i V_i^{T},\quad i = 1, 2, 3$

(2) $|D(m,n)| = \sqrt{|D_x(m,n)|^{2} + |D_y(m,n)|^{2}}$

(3) $\begin{cases} |D_x(m,n)| = T\sum_{i=1}^{m} v_x(i,n) - Tm\bar{v}_x \\ |D_y(m,n)| = T\sum_{j=1}^{n} v_y(m,j) - Tn\bar{v}_y \end{cases}$

(4) $\hat{v}(m,n) \approx \dfrac{D_{seg}}{N_{seg}T}$

(5) $N_{seg} = \left[\dfrac{N}{n}\right]$

(6) $C_x = \dfrac{N'_{seg}\sum_{\Delta m = 0}^{N'_{seg}} I'(m+\Delta m, n)}{N_{seg}\sum_{\Delta m = 0}^{N_{seg}} I(m+\Delta m, n)},\quad C_y(n) = \dfrac{N'_{seg}\sum_{\Delta n = 0}^{N'_{seg}} I'(m, n+\Delta n)}{N_{seg}\sum_{\Delta n = 0}^{N_{seg}} I(m, n+\Delta n)}$

(7) $F(m) = a_i + b_i(m-m_i) + c_i(m-m_i)^{2} + d_i(m-m_i)^{3}$, with
$a_i = I(m_i)$,
$b_i = \dfrac{I(m_{i+1})-I(m_i)}{m_{i+1}-m_i} - \dfrac{m_{i+1}-m_i}{2}\ddot{I}(m_i) - \dfrac{m_{i+1}-m_i}{6}\left(\ddot{I}(m_{i+1})-\ddot{I}(m_i)\right)$,
$c_i = \dfrac{\ddot{I}(m_i)}{2}$,
$d_i = \dfrac{\ddot{I}(m_{i+1})-\ddot{I}(m_i)}{6(m_{i+1}-m_i)}$

(8) $\begin{cases} I'(m,n) = \dfrac{\sum_{j=0}^{[C]/2} 2^{[C]/2 - j}[C]\, I(m+j,n) + \sum_{j=0}^{[C]/2} 2^{[C]/2 - j}[C]\, I(m-j,n)}{[C]\sum_{j=0}^{[C]/2} 2^{2j}}, & [C]\ \text{is even} \\[2ex] I'(m,n) = \dfrac{\sum_{j=0}^{[C+1]/2} 2^{[C+1]/2 - j}[C]\, I(m+j,n) + \sum_{j=0}^{[C-1]/2} 2^{[C-1]/2 - j}[C]\, I(m-j,n)}{[C]\sum_{j=0}^{[C]/2} 2^{2j} + 2^{[C+1]/2}}, & [C]\ \text{is odd} \end{cases}$
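As a sanity check on the cubic-spline interpolation step (the seventh equation), a short sketch verifies that each spline piece, built from the samples and second derivatives at its knots, interpolates both of its endpoints. The numeric values and function names here are illustrative only.

```python
import numpy as np

def spline_piece(I_i, I_ip1, M_i, M_ip1, m_i, m_ip1):
    """Coefficients (a, b, c, d) of the cubic piece F on [m_i, m_{i+1}],
    where M_i, M_ip1 are the second derivatives at the knots."""
    h = m_ip1 - m_i
    a = I_i
    b = (I_ip1 - I_i) / h - h / 2 * M_i - h / 6 * (M_ip1 - M_i)
    c = M_i / 2
    d = (M_ip1 - M_i) / (6 * h)
    return a, b, c, d

def F(m, coeffs, m_i):
    """Evaluate the cubic piece at position m."""
    a, b, c, d = coeffs
    t = m - m_i
    return a + b * t + c * t ** 2 + d * t ** 3

# Illustrative knot values: I(m_i)=1.0, I(m_{i+1})=2.5 on [0, 1] with
# arbitrary second derivatives 0.3 and -0.7.
coeffs = spline_piece(1.0, 2.5, 0.3, -0.7, 0.0, 1.0)
```

By construction, F(m_i) recovers I(m_i) through the a_i term, and the b_i formula is exactly what makes F(m_{i+1}) recover I(m_{i+1}).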