
Multi-step deep neural network for identifying subfascial vessels in a dorsal skinfold window chamber model

Open Access

Abstract

Automatic segmentation of blood vessels in the dorsal skinfold window chamber (DSWC) model is a prerequisite for evaluating the biological response to vascular-targeted photodynamic therapy (V-PDT). Deep learning methods have recently been widely applied to blood vessel segmentation, but they have difficulty precisely identifying subfascial vessels. This study proposes a multi-step deep neural network, named the global attention-Xnet (GA-Xnet) model, to precisely segment subfascial vessels in the DSWC model. We first used a Hough transform combined with a U-Net model to extract a circular region of interest for image processing. A global attention (GA) step was then employed to learn global features and produce a coarse segmentation of the entire blood vessel image. Next, the coarse segmentation maps from the GA step and an equal number of retinal images from the DRIVE dataset were combined into a mixed sample and input into the Xnet step, which learns multiscale features to predict fine segmentation maps of the blood vessels. The data show that the accuracy, sensitivity, and specificity for the segmentation of multiscale blood vessels in the DSWC model are 96.00%, 86.27%, and 96.47%, respectively. As a result, the subfascial vessels could be accurately identified, and the connectedness of the vessel skeleton is well preserved. These findings suggest that the proposed multi-step deep neural network helps evaluate short-term vascular responses in V-PDT.

© 2021 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Vascular-targeted photodynamic therapy (V-PDT), based on local destruction of the vasculature, is an effective therapeutic modality for treating vascular-related diseases, such as prostate cancer, age-related macular degeneration, and port-wine stain [1–4]. However, the therapeutic mechanisms and biological responses underlying vascular damage during V-PDT have not been fully explored. Previous studies showed that the biological response after V-PDT treatment could be measured from the time-dependent change of blood vessel diameter [5,6]. For this, the mouse dorsal skinfold window chamber (DSWC) is widely used as an in vivo model to evaluate the influence of dosimetric parameters on V-PDT efficacy, including photosensitizer concentration, light dose, tissue oxygen level, and singlet oxygen production [7–10]. In addition, the dynamic changes (i.e., morphology, branching pattern, density, and diameter) of targeted vasculature have been successfully monitored for optimizing the dosimetric parameters [11–13].

To quantitatively measure blood vessel diameter, traditional segmentation algorithms, such as adaptive thresholding and Hessian filtering, were first adopted to obtain a binary map of the blood vessels [14–16]. The binary map was then skeletonized by a thinning process that iteratively eliminates pixels from the boundaries toward the centerline [17,18]. Finally, twice the distance between the vessel skeleton and the vessel boundary was defined as the vessel diameter; the skeleton of a blood vessel is thus an important geometric cue for quantifying vessel diameter. Because wide-field imaging technologies (e.g., stereomicroscopy) lack depth-resolved capability, local blood vessels are inevitably shielded by the fascia layers, hindering the detection of sluggish or absent blood flow in the DSWC model [19,20]. Applying the earlier algorithms may therefore fracture the segmentation of subfascial vessels and disconnect the vessel skeleton. To address this concern, we previously used a mean filter with a larger kernel in the dynamic threshold algorithm to avoid vessel fracture [13]. Unfortunately, an overly large kernel reduces segmentation accuracy for small blood vessels. Hence, a robust segmentation algorithm is needed to identify multiscale vessels and to meet the skeletonization requirement for subfascial vessels.
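For illustration, the diameter-quantification idea described above can be sketched in a few lines of Python. This is a minimal sketch under our own assumptions, not the authors' code: it takes an already-binarized vessel map, skeletonizes it, and reads the local vessel radius from the Euclidean distance transform at each centerline pixel.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize

def vessel_diameters(binary_map: np.ndarray) -> np.ndarray:
    """Estimated diameter (in pixels) at every skeleton pixel."""
    vessels = binary_map.astype(bool)
    skeleton = skeletonize(vessels)            # thin the mask to its centerline
    dist = distance_transform_edt(vessels)     # distance of each pixel to background
    return 2.0 * dist[skeleton]                # diameter = twice the centerline radius

# Example on a synthetic, 10-pixel-wide "vessel"
vessel = np.zeros((64, 64), dtype=np.uint8)
vessel[27:37, 5:60] = 1
print(vessel_diameters(vessel).mean())  # close to 10 px
```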

Recently, deep learning methods based on the U-Net network structure have been designed to construct highly detailed segmentation maps from very limited training samples and have been widely adopted to improve the accuracy (Acc), sensitivity (Se), and specificity (Sp) of vessel segmentation [21–26]. For instance, Wu et al. used a multiscale network followed network (MS-NFN) model to improve retinal vessel segmentation accuracy, in which an ‘up-pool’ NFN submodel and a ‘pool-up’ NFN submodel construct the retinal blood vessel segmentation [23]. Li et al. modified a residual U-Net (MResU-Net) to segment vessel pixels accurately, using a before-activation residual block structure to improve the performance of U-Net-based methods [24]. Guo et al. utilized the spatial attention U-Net (SA-UNet) to enhance retinal blood vessel segmentation [25]; in this model, a spatial attention module was added between the encoder and decoder of the backbone to ensure the identification of edges and small blood vessels. Compared to the above U-Net variants, the UNet++ model, with its redesigned skip connections and deep supervision, has achieved higher performance for semantic and instance segmentation [27]. In this work, we proposed a multi-step deep neural network, named the global attention-Xnet (GA-Xnet) model, consisting of three steps: (1) U-Net Hough is used to ensure robust extraction of a circular region of interest. (2) The GA step utilizes Attention U-Net [28] to learn global features of blood vessel images in the DSWC model from manually labeled training samples and to achieve a coarse segmentation; in the coarsely segmented images, the trunk of the blood vessel is labeled accurately without supervision. (3) In the Xnet step [27], which refers to UNet++, mixed samples combining the coarse labeling images and the retinal images of the DRIVE dataset [29] are fed into the UNet++ model to optimize the identification of multiscale blood vessel features. Compared to the U-Net method, higher-accuracy segmentation images were obtained using the GA-Xnet model, and the continuity of the skeletonization of subfascial vessels could be successfully preserved.

2. Methods

2.1 Blood vessels in the DSWC model

Dorsal skinfold chambers (APJ Trading Co., Ventura, CA, USA) were implanted in BALB/c nude mice (male, 25–30 g, Shanghai SLAC Laboratory Animal Co. Ltd.) using a well-established protocol [19,20,30]. For the V-PDT experiment, rose bengal (RB) solution at a dosage of 25 mg/kg body weight (Sigma-Aldrich, St. Louis, MO, USA) was intravenously administered via the tail vein. Immediately after RB administration, a 532 nm semiconductor laser was used as the light source to uniformly irradiate the blood vessels in the DSWC model; an irradiance of 50 mW/cm² and a total radiant exposure of 30 J/cm² were employed. Images of the DSWC model were captured before and immediately after V-PDT using a Leica MZ16FA microscope equipped with a DFC300FX camera. The lens system provides 16× magnification, with illumination from a halogen light source (Leica CLS 150X) and a fiber optic ring illuminator. All animal procedures and experiments were approved by the Institutional Animal Care and Use Committee of Fujian Normal University.

2.2 Image preprocessing

The flow chart for automatic extraction of the circular ROI is shown in Fig. 1. To improve the segmentation efficiency for the blood vessel image, preprocessing comprises labeling the circular ROI, training a U-Net model, and segmenting the ROI with the trained model, followed by hole filling and image binarization. The edge of the resulting binary image is then identified using a Canny operator, and the circular ROI is automatically extracted using the circle Hough transform [11,31,32].
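As a rough illustration of this pipeline (not the authors' implementation), the edge detection and circle detection can be combined with OpenCV; the file names, Canny thresholds, and radius bounds below are assumptions for illustration only.

```python
import cv2
import numpy as np

# Assumed inputs (illustrative file names): the original DSWC image and the
# binary ROI mask produced by the trained U-Net.
image = cv2.imread("dswc_original.png", cv2.IMREAD_GRAYSCALE)
mask = cv2.imread("unet_roi_mask.png", cv2.IMREAD_GRAYSCALE)

edges = cv2.Canny(mask, 50, 150)  # edge of the binary mask

# Circle Hough transform; minDist and the radius bounds are illustrative.
circles = cv2.HoughCircles(edges, cv2.HOUGH_GRADIENT, dp=1, minDist=500,
                           param1=100, param2=30, minRadius=200, maxRadius=600)
if circles is not None:
    cx, cy, r = np.round(circles[0, 0]).astype(int)       # strongest circle
    circle_mask = np.zeros_like(mask)
    cv2.circle(circle_mask, (cx, cy), r, 255, thickness=-1)
    roi = cv2.bitwise_and(image, image, mask=circle_mask)  # extracted circular ROI
```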

Fig. 1. Flow chart for automatic extraction of the circular ROI.

2.3 Multi-step deep neural network

The proposed GA-Xnet model, diagrammed in Fig. 2, segments the blood vessels in the DSWC model in two steps: (1) extracting the global features and automatically labeling the large blood vessels using the Attention U-Net model; (2) based on transfer learning theory, mixing the labeled large-vessel images with retinal images from the DRIVE dataset, randomly partitioning the mixed sample into 48 × 48 pixel blocks that contain sufficient feature information of large blood vessels, and feeding these blocks into the UNet++ model for training. The trained model segments multiscale blood vessels (i.e., arteries, veins, and capillaries) and accurately identifies subfascial vessels.
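A minimal sketch of the block-partitioning idea in step (2) follows; the array shapes, block count, and function names are our illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_blocks(image: np.ndarray, mask: np.ndarray,
                  n_blocks: int = 100, size: int = 48):
    """Randomly crop paired image/label blocks for Xnet-step training."""
    h, w = image.shape[:2]
    imgs, lbls = [], []
    for _ in range(n_blocks):
        y = rng.integers(0, h - size + 1)   # random top-left corner
        x = rng.integers(0, w - size + 1)
        imgs.append(image[y:y + size, x:x + size])
        lbls.append(mask[y:y + size, x:x + size])
    return np.stack(imgs), np.stack(lbls)
```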

Fig. 2. The multi-step deep neural network of the GA-Xnet model: Attention U-Net was used in the GA step to achieve coarse segmentation of the large blood vessels (marked by the red box). The labeled images of large blood vessels and the DRIVE retinal images were then combined and fed into the UNet++ model in the Xnet step for multiscale feature learning (marked by the blue box).

2.3.1 Coarse segmentation

The coarse segmentation of large blood vessels was realized through data preprocessing and model training in the GA step. Manual image labeling and augmentation are crucial preprocessing procedures given the lack of blood vessel datasets for the DSWC model. Typical blood vessel images containing subfascial vessels were therefore selected as the training set in this study. The large blood vessels were manually marked with accurate pixel-wise labels, and non-vascular information was removed. In particular, the subfascial vessels were precisely delineated so that the Attention U-Net model could be trained to identify them. The constructed blood vessel masks were then fed into the Attention U-Net model. As shown in Fig. 2, the integrated attention gates (AGs), denoted by the symbol Ⓐ in the Attention U-Net model, progressively suppress feature responses in non-vascular regions and ensure the identification of subfascial vessels. To overcome the issue of limited datasets, the coarse segmentation images were augmented through rotation, shifting, zooming in/out, and flipping [21].
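The augmentation described above can be sketched, for example, with Keras generators. The parameter values are assumptions, not the authors' settings; the key point is that identical settings and a shared seed keep the pixel-wise masks aligned with the augmented vessel images.

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Dummy stand-ins for the labeled DSWC samples, shape (N, 320, 320, 1).
images = np.random.rand(6, 320, 320, 1).astype("float32")
masks = (np.random.rand(6, 320, 320, 1) > 0.5).astype("float32")

# Rotation, shift, zoom, and flip augmentation (illustrative ranges).
aug = dict(rotation_range=90, width_shift_range=0.1, height_shift_range=0.1,
           zoom_range=0.2, horizontal_flip=True, vertical_flip=True,
           fill_mode="reflect")
image_flow = ImageDataGenerator(**aug).flow(images, batch_size=2, seed=42)
mask_flow = ImageDataGenerator(**aug).flow(masks, batch_size=2, seed=42)
train_flow = zip(image_flow, mask_flow)  # yields (image_batch, mask_batch) pairs
```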

2.3.2 Fine segmentation

The coarse segmentation images were first restored to their initial size by the nearest-neighbor interpolation algorithm [33]. To effectively identify multiscale blood vessels from large to small, the coarse segmentation images were combined with the retinal images of the DRIVE dataset to generate a mixed sample [34–36]. In the transductive transfer learning setting, the mixing ratio between the coarse segmentation images and the DRIVE dataset was carefully evaluated to obtain the optimal parameters $\theta^\ast$ of empirical risk minimization (ERM), as follows:

$$\theta^\ast \approx \mathop{\arg\min}\limits_{\theta \in \Theta} \sum\limits_{i = 1}^{n_s} \frac{P_t(x_{ti}, y_{ti})}{P_s(x_{si}, y_{si})} \cdot l(x_{si}, y_{si}, \theta)$$
where $(x_{ti}, y_{ti})$ refers to an instance of the target domain (the coarse segmentation images) and $(x_{si}, y_{si})$ refers to an instance of the source domain (the DRIVE dataset). $y_{ti}$ and $y_{si}$ are the labels of $x_{ti}$ and $x_{si}$ in task T, respectively. $P_t(x_{ti}, y_{ti})$ and $P_s(x_{si}, y_{si})$ are the probability distributions of each instance in the target and source domains, respectively, and $l(x_{si}, y_{si}, \theta)$ is a loss function of the parameters $\theta$. Because the recognition tasks for blood vessels in the DSWC model and in retinal images are similar, the ratio of $P_t(x_{ti}, y_{ti})$ to $P_s(x_{si}, y_{si})$ can be approximated using Eq. (2).
$$\frac{P_t(x_{ti}, y_{ti})}{P_s(x_{si}, y_{si})} \approx \frac{P(x_{si})}{P(x_{ti})}$$

In this case, an appropriate mixing proportion of the training datasets was determined for the individual instances by estimating $\frac{P(x_{si})}{P(x_{ti})}$. Moreover, the sample datasets were augmented through the sliding-window method during training [21]. All augmented sample datasets were eventually input into the UNet++ model, which is endowed with a stronger ability for multiscale feature learning, as indicated in Fig. 2.
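As a sketch of how such a mixed training set might be assembled at a chosen target:source ratio (the function, names, and sampling strategy are our illustrative assumptions; the importance weight of Eq. (1) is approximated here simply by the relative sampling frequency of the two datasets):

```python
import numpy as np

def mix_datasets(dswc_imgs, dswc_lbls, drive_imgs, drive_lbls,
                 ratio=(1, 1), seed=0):
    """Combine DSWC coarse-segmentation samples (target domain) with DRIVE
    retinal samples (source domain) at a target:source ratio."""
    rng = np.random.default_rng(seed)
    n_t, n_s = len(dswc_imgs), len(drive_imgs)
    n_s_keep = min(n_s, int(n_t * ratio[1] / ratio[0]))  # sources to retain
    idx = rng.choice(n_s, size=n_s_keep, replace=False)
    images = np.concatenate([dswc_imgs, drive_imgs[idx]])
    labels = np.concatenate([dswc_lbls, drive_lbls[idx]])
    perm = rng.permutation(len(images))                  # shuffle the mixture
    return images[perm], labels[perm]
```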

2.4 Evaluation metrics

To quantify the performance of the GA-Xnet model, the Acc, Se, and Sp were determined using Eq. (3) [26].

$$Acc = \frac{TP + TN}{TP + FP + TN + FN}, \quad Se = \frac{TP}{TP + FN}, \quad Sp = \frac{TN}{TN + FP}$$
where true positives (TP) and true negatives (TN) are correctly identified pixels belonging to blood vessels and background, while false positives (FP) and false negatives (FN) are incorrectly identified pixels assigned to blood vessels and background, respectively.
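Eq. (3) translates directly into code; a minimal sketch for binary prediction and ground-truth maps:

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray):
    """Return (Acc, Se, Sp) computed from binary maps, per Eq. (3)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)      # vessel pixels correctly detected
    tn = np.sum(~pred & ~truth)    # background correctly rejected
    fp = np.sum(pred & ~truth)     # background wrongly called vessel
    fn = np.sum(~pred & truth)     # vessel pixels missed
    acc = (tp + tn) / (tp + fp + tn + fn)
    se = tp / (tp + fn)
    sp = tn / (tn + fp)
    return acc, se, sp
```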

3. Results and discussion

3.1 Automatic extraction of circular ROI

The process for automatic extraction of the circular ROI is illustrated in Fig. 3. For comparison, the same image as Fig. 5(a) of our previous study [13] was re-used as the original image in Fig. 3(a). Based on images of the DSWC model (n = 6) manually labeled by one individual, 300 samples of 320×320 pixels were generated through rotation, shifting, zooming in/out, and flipping. The augmented samples were then input into the U-Net network to train the circular-ROI segmentation model. The trained U-Net model was first used to segment the irregular circular area from the original image in Fig. 3(a); the segmentation result is shown in Fig. 3(b). The binary image in Fig. 3(c) was obtained via hole filling and image binarization. The edge of the binary image, marked with white dots in Fig. 3(d), was then identified using the Canny operator. To exclude incorrect boundary points, the product of an empirical threshold rate of 0.6 and the highest count in the Hough space was selected as the threshold value; the boundary points above this threshold were retained and labeled with red dots in Fig. 3(d). Following the previous approach [13], the standard circular domain shown in Fig. 3(e) was obtained by calculating the average center coordinate and the minimum diameter from these red dots. The overlay image in Fig. 3(f) shows the reconstructed circular region (Fig. 3(e)) overlaid on the original image (Fig. 3(a)). The extracted circular ROI shown in Fig. 3(g) was obtained as the pixel-wise product of Fig. 3(a) and Fig. 3(e). Comparing Fig. 3(e) and Fig. 3(g), one finds that U-Net Hough successfully extracts the circular ROI from the image of the DSWC model.
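The final reconstruction step (average center, minimum radius) can be sketched as follows. This is our reading of the description above, not the authors' exact procedure; `points` stands for the retained red-dot coordinates.

```python
import numpy as np

def fit_circle(points: np.ndarray):
    """Reconstruct the standard circular domain from the retained boundary
    points: center = mean coordinate of the points, radius = the minimum
    center-to-point distance, so the circle stays inside the chamber window.
    `points` is an (N, 2) array of (x, y) coordinates."""
    center = points.mean(axis=0)
    radius = np.linalg.norm(points - center, axis=1).min()
    return center, radius

pts = np.array([[0, 10], [10, 0], [0, -10], [-10, 0]], dtype=float)
print(fit_circle(pts))  # center ~(0, 0), radius ~10
```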

Fig. 3. The process of automatic circular ROI selection using U-Net Hough. (a) Original image; (b) segmentation image; (c) binary image of the irregular circular region; (d) edge of the binary image (labeled by the white dots) and boundary points above the threshold value (marked by the red dots); (e) reconstructed circular region obtained using the Hough transform; (f) overlay of the reconstructed circular region and the original image; (g) extracted circular ROI.

Fig. 4. (a1) (b1) The original images, (a2)-(a4), (a6)-(a8), (b2)-(b4), (b6)-(b8) the edge of binary images (labeled by the white dots) and the boundary points above threshold rate (marked by the red dots) obtained by using Otsu and U-Net with various threshold rates (0.8: red circle; 0.6: blue circle; 0.3: yellow circle), respectively. (a5) (a9) (b5) (b9) Auto-selected circular ROIs overlapped with the original image achieved by using Otsu Hough and U-Net Hough, respectively (n = 2).

Fig. 5. Original images and corresponding labeled vascular binary images used as training samples in the GA step (n = 6).

To further evaluate the robustness of U-Net Hough, Otsu Hough was adopted for a comparative study [13]. Due to the non-uniform reflected light intensity of the metal frame, false edges (labeled with white dots) around the metal frame were extracted using the Otsu and Canny operators in Figs. 4(a2)–4(a4) and Figs. 4(b2)–4(b4), respectively. Significantly different distributions of the boundary points (labeled with red dots) above the threshold value were observed for threshold rates of 0.3, 0.6, and 0.8. As shown in Figs. 4(a5) and 4(b5), inconsistent circular ROIs were obtained with Otsu Hough: the red circular ROI in Fig. 4(a5) and the yellow circular ROI in Fig. 4(b5) were mismatched relative to the other ROIs. In contrast, precise edges (labeled with white dots) around the metal frame were obtained using the U-Net and Canny operators in Figs. 4(a6)–4(a8) and Figs. 4(b6)–4(b8). Notably, the boundary points (labeled with red dots) above the threshold value show a similar distribution for threshold rates of 0.3, 0.6, and 0.8, and consistent circular ROIs were obtained by U-Net Hough, as shown in Figs. 4(a9) and 4(b9).

The Acc, Se, and Sp were further calculated according to Eq. (3). In this case, TP and TN are correctly identified pixels belonging to the circular ROI and background, while FP and FN are incorrectly identified pixels assigned to the circular ROI and background, respectively. The performance of automatic circular-ROI extraction (n = 10, data not shown) was evaluated using manually labeled circular ROIs as the standard. As illustrated in Table 1, the Acc, Se, and Sp are 94.47 ± 0.02%, 86.30 ± 0.02%, and 99.99 ± 0.01% for U-Net Hough, which are higher than those obtained by Otsu Hough. This finding reveals that U-Net Hough provides better robustness than Otsu Hough.

Table 1. Performance of U-Net Hough and Otsu Hough

3.2 Coarse segmentation of large vessels

To ensure the training efficacy of the GA step, vascular images containing subfascial vessels in the DSWC model were selected as manually labeled samples, as illustrated in Fig. 5. The large-scale blood vessels in the original images were individually marked as black pixels by one person using Microsoft drawing software, and the black pixels were subsequently identified to reconstruct the labeled vascular binary images. Note that the fractured parts of subfascial vessels were manually restored during the labeling process. The labeled images were then augmented through rotation, shifting, zooming in/out, and flipping.

To preserve the global features of large blood vessels and to allow specialized training on subfascial vessels, entire blood vessel images were used as the training objective. Owing to the memory limitation of the NVIDIA Quadro M4000 GPU, the original blood vessel images of 1040×1392 pixels were first downsampled to 320×320 pixels before being fed into the GA step for training. Typical original images of blood vessels (n = 2) in the DSWC model are shown in Figs. 6(a1) and 6(a2). The dynamic threshold algorithm (C = 3, N = 35) [13] and the block image training method were chosen for comparison of the coarse segmentation. Compared to the segmented images in Figs. 6(b1) and 6(b2) and in Figs. 6(c1) and 6(c2), the segmented images in Figs. 6(d1) and 6(d2) show better performance for coarse segmentation of large blood vessels. Furthermore, the fractured segmentation of subfascial vessels cannot be effectively restored by the dynamic threshold algorithm or the block image training method, as indicated by the yellow arrows in the enlarged sub-images of Figs. 6(b2) and 6(c2). Meanwhile, portions of the fascia were mistakenly identified as large vessels, as indicated by the yellow arrows in the enlarged sub-images of Figs. 6(b1) and 6(c1). By contrast, the entire-image training method avoids the mistakes of the dynamic threshold algorithm and block image training, and effectively identifies the subfascial vessels, as indicated by the yellow arrows in the enlarged sub-images of Figs. 6(d1) and 6(d2). Therefore, the entire-image training method was chosen to achieve the coarse segmentation of large vessels.
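The resizing described here, and the nearest-neighbor restoration mentioned in Section 2.3.2, amount to the following sketch (the file name and the width/height ordering are illustrative assumptions):

```python
import cv2

# Downsample the full DSWC image for entire-image training in the GA step
# (assuming 1392x1040 as width x height; adjust to the actual orientation).
img = cv2.imread("dswc_vessels.png", cv2.IMREAD_GRAYSCALE)
small = cv2.resize(img, (320, 320), interpolation=cv2.INTER_AREA)

# ... train the Attention U-Net and predict on `small` ...

# Restore the coarse prediction to the original size with nearest-neighbor
# interpolation, as in Section 2.3.2; cv2.resize takes (width, height).
restored = cv2.resize(small, (1392, 1040), interpolation=cv2.INTER_NEAREST)
```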

Fig. 6. (a) Original images and segmentation images of blood vessels achieved by using (b) dynamic threshold algorithm (C = 3, N = 35), (c) block image training, and (d) entire image training, respectively (n = 2). The yellow arrows in the enlarged sub-images indicate the superficial fascia in the original images and segmentation images using the three methods above.

3.3 Fine segmentation of small vessels

The DRIVE dataset comprises color fundus images acquired using a Canon CR5 non-mydriatic 3CCD camera. Because of the similar image acquisition manner and small-vessel features, the DRIVE dataset was selected to construct the total training sample by combining it with the coarse segmentation images of blood vessels from the GA step (n = 40) at various mixing ratios. According to Eq. (2), the mixing ratios were set to 2:1, 1:1, 1:2, and 1:5, based on the assumption that the proportion of small blood vessels is in the range of 30%–80% in the DSWC model. The total training samples were then input into the Xnet step of the GA-Xnet model to generate the probability maps of multiscale blood vessels. In the Xnet step, the total training samples were randomly split into a training set (90%) and a validation set (10%), and two manually labeled multiscale vascular images of the DSWC model were used as the testing set. For both training and inference, the block size was set to 48 × 48 pixels. As shown in Figs. 7(a)–(d), the small blood vessels in the enlarged sub-images could be clearly identified for the various mixing ratios. In addition, the Acc, Se, and Sp were obtained for further evaluation of the different mixing ratios. Before calculating these metrics, binary maps of the multiscale blood vessels were first obtained through binarization, and the non-vascular isolated points were then eliminated using a region descriptor. As shown in Fig. 7(e), the highest Se (88.87%) is achieved with the mixing ratio of 1:5, while the highest Acc (96.40%) and Sp (96.89%) are obtained with the mixing ratio of 1:2. Compared to the mixing ratios of 2:1 and 1:1, the enlarged sub-images in the lower-left corners of Figs. 7(c) and 7(d) indicate that undesired holes within the large vessels can be generated at mixing ratios of 1:2 and 1:5, likely because the trained Xnet model becomes more sensitive to small blood vessels and noise as the proportion of retinal vessels increases. As indicated in Fig. 7, the most satisfactory results are achieved with the mixing ratio of 1:1, which provides high values of Acc (96.00%), Se (86.27%), and Sp (96.47%) and effectively prevents the generation of undesired holes in large blood vessels. Moreover, these values are superior to the Acc (90.64%), Se (80.12%), and Sp (92.83%) derived using the U-Net model [13]. The optimal mixing ratio was therefore set to 1:1 for the fine segmentation of multiscale blood vessels.
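The binarization and isolated-point removal described above can be sketched with a connected-component size descriptor; the threshold and minimum object size below are assumed values, not the authors' settings.

```python
import numpy as np
from skimage.morphology import remove_small_objects

def postprocess(prob_map: np.ndarray, thresh: float = 0.5,
                min_size: int = 30) -> np.ndarray:
    """Binarize a probability vessel map, then drop non-vascular isolated
    points whose connected-component area falls below min_size."""
    binary = prob_map > thresh
    return remove_small_objects(binary, min_size=min_size)
```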

Fig. 7. Comparison of probability multiscale blood vessel maps with various mixing ratios between coarsely segmented blood vessel images and retinal images: (a) 2:1 (b) 1:1 (c) 1:2 (d) 1:5, and (e) the corresponding performance of Acc, Se, and Sp.

In addition, we compared the performance of GA-Xnet with other existing methods currently used for the retinal vessel segmentation task. As represented in Table 2, the training set includes the retinal DRIVE dataset and vascular images of the DSWC model with the mixing ratio set to 1:1. GA-Xnet achieves the best Se (86.27%), while its Acc and Sp are comparable to those of the U-Net, UNet++, and Attention U-Net models. Compared to the Acc (90.64%), Se (80.12%), and Sp (92.83%) of our previous study [13], higher values of Acc (96.45%), Se (82.47%), and Sp (97.13%) were obtained by using the region descriptor to remove the non-vascular isolated points. On this basis, the segmentation performance for subfascial vessels was further evaluated for the U-Net and GA-Xnet models.

Table 2. Performance of GA-Xnet and other methods

3.4 Segmentation images for pre- and post- V-PDT

The original ROI images, the probability blood vessel maps, the vascular skeletons, and the overlays of the vascular skeleton on the original image in the DSWC model are shown in Fig. 8. For the U-Net model, the subfascial vessels can be accurately identified only post-V-PDT, as marked by the white arrow in the fourth column of Fig. 8(b2); the subfascial vessels marked by the yellow arrows for both pre- and post-V-PDT, and those marked by the white arrows pre-V-PDT, cannot be identified. Such unrecognized subfascial vessels pre- or post-V-PDT will undoubtedly introduce errors into the calculation of vasoconstriction. Fortunately, this potential error could be avoided through the accurate identification of subfascial vessels by the proposed GA-Xnet model, as shown in Fig. 8(c2): the subfascial vessels hidden beneath the superficial fascia both pre- and post-V-PDT were successfully recognized and recovered, as marked by the yellow and white arrows for all three cases.

Fig. 8. (a1) The original ROI images and (a2) the corresponding enlarged sub-images marked by the red rectangle box. (b1) (c1) Probability blood vessel maps of vascular images in the DSWC model pre- and post-V-PDT obtained using the U-Net and GA-Xnet models, respectively. (b2) (c2) The enlarged sub-images of the probability blood vessel maps marked by the red rectangle box. (b3) (c3) The vascular skeletons obtained by applying a skeleton extraction algorithm to (b2) and (c2), respectively. (b4) (c4) The overlay images of the vascular skeleton and the original image. The yellow and white arrows indicate the superficial fascia in the original and processed images, respectively (n = 3).

Furthermore, the enlarged sub-images in Figs. 8(b2) and 8(c2) were processed using the skeleton extraction algorithm [17,18]. Compared to the U-Net model, the skeleton continuity of the subfascial vessels was well preserved by the GA-Xnet model, as marked by the yellow and white arrows in Figs. 8(c3) and 8(c4). This finding implies that the GA-Xnet model can accurately quantify blood vessel diameter (i.e., vasoconstriction).

4. Conclusion

In conclusion, the GA-Xnet model was proposed to precisely segment subfascial vessels in the DSWC model. The data show that the Acc, Se, and Sp for the segmentation of multiscale vessels in the DSWC model are 96.00%, 86.27%, and 96.47%, respectively. As a result, the subfascial vessels could be accurately identified, and the connectedness of the vessel skeleton is well preserved. Moreover, the potential error in vasoconstriction measurement caused by unrecognized subfascial vessels pre-V-PDT can be avoided by the proposed GA-Xnet model. These findings suggest that the proposed multi-step deep neural network helps evaluate the short-term vascular responses in V-PDT; its feasibility for assessing long-term responses remains to be evaluated.

Funding

National Natural Science Foundation of China (61635014, 61805040, 61935004); Natural Science Foundation of Fujian Province (2019J05061, 2019Y4004).

Acknowledgments

We gratefully acknowledge Dr. Lothar Lilge from the University of Toronto for his insightful discussion and assistance in proofreading.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. A. R. Azzouzi, S. Vincendeau, E. Barret, A. Cicco, F. Kleinclauss, H. G. van der Poel, C. G. Stief, J. Rassweiler, G. Salomon, E. Solsona, A. Alcaraz, T. T. Tammela, D. J. Rosario, F. Gomez-Veiga, G. Ahlgren, F. Benzaghou, B. Gaillac, B. Amzal, F. M. Debruyne, G. Fromont, C. Gratzke, M. Emberton, and PCM301 Study Group, “Padeliporfin vascular-targeted photodynamic therapy versus active surveillance in men with low-risk prostate cancer (CLIN1001 PCM301): an open-label, phase 3, randomised controlled trial,” Lancet Oncol. 18(2), 181–191 (2017).

2. S. J. Bakri, J. E. Thorne, A. C. Ho, J. P. Ehlers, S. D. Schoenberger, S. Yeh, and S. J. Kim, “Safety and efficacy of anti-vascular endothelial growth factor therapies for neovascular age-related macular degeneration: a report by the American Academy of Ophthalmology,” Ophthalmology 126(1), 55–63 (2019).

3. M. Zhang, Q. Wu, T. Lin, L. Guo, Y. Ge, R. Zeng, Y. Yang, H. Rong, G. Jia, Y. Huang, J. Fang, H. Shi, W. Zhao, S. Chen, and P. Cai, “Hematoporphyrin monomethyl ether photodynamic therapy for the treatment of facial port-wine stains resistant to pulsed dye laser,” Photodiagnosis Photodyn. Ther. 31, 101820 (2020).

4. D. Chen, J. Ren, Y. Wang, H. Zhao, B. Li, and Y. Gu, “Relationship between the blood perfusion values determined by laser speckle imaging and laser Doppler imaging in normal skin and port wine stains,” Photodiagnosis Photodyn. Ther. 13, 1–9 (2016).

5. S. Cavin, T. Riedel, P. Rosskopfova, M. Gonzalez, G. Baldini, M. Zellweger, G. Wagnieres, P. J. Dyson, H. B. Ris, T. Krueger, and J. Y. Perentes, “Vascular-targeted low dose photodynamic therapy stabilizes tumor vessels by modulating pericyte contractility,” Lasers Surg. Med. 51(6), 550–561 (2019).

6. T. T. Mai, S. W. Yoo, S. Park, J. Y. Kim, K. H. Choi, C. Kim, S. Y. Kwon, J. J. Min, and C. Lee, “In vivo quantitative vasculature segmentation and assessment for photodynamic therapy process monitoring using photoacoustic microscopy,” Sensors 21(5), 1776 (2021).

7. L. Lin, H. Lin, Y. Shen, D. Chen, Y. Gu, B. C. Wilson, and B. Li, “Singlet oxygen luminescence image in blood vessels during vascular-targeted photodynamic therapy,” Photochem. Photobiol. 96(3), 646–651 (2020).

8. S. Kwiatkowski, B. Knap, D. Przystupski, J. Saczko, E. Kedzierska, K. Knap-Czop, J. Kotlinska, O. Michel, K. Kotowski, and J. Kulbacka, “Photodynamic therapy - mechanisms, photosensitizers and combinations,” Biomed. Pharmacother. 106, 1098–1107 (2018).

9. L. S. Sampaio, S. R. de Annunzio, L. M. de Freitas, L. O. Dantas, L. de Boni, M. C. Donatoni, K. T. de Oliveira, and C. R. Fontana, “Influence of light intensity and irradiation mode on methylene blue, chlorin-e6 and curcumin-mediated photodynamic therapy against Enterococcus faecalis,” Photodiagnosis Photodyn. Ther. 31, 101925 (2020).

10. H. Wang, Z. Wang, Y. Li, T. Xu, Q. Zhang, M. Yang, P. Wang, and Y. Gu, “A novel theranostic nanoprobe for in vivo singlet oxygen detection and real-time dose-effect relationship monitoring in photodynamic therapy,” Small 15(39), 1902185 (2019).

11. K. Haedicke, L. Agemy, M. Omar, A. Berezhnoi, S. Roberts, C. Longo-Machado, M. Skubal, K. Nagar, H. T. Hsu, K. Kim, T. Reiner, J. Coleman, V. Ntziachristos, A. Scherz, and J. Grimm, “High-resolution optoacoustic imaging of tissue responses to vascular-targeted therapies,” Nat. Biomed. Eng. 4(3), 286–297 (2020).

12. D. Chen, W. Yuan, H. C. Park, and X. Li, “In vivo assessment of vascular-targeted photodynamic therapy effects on tumor microvasculature using ultrahigh-resolution functional optical coherence tomography,” Biomed. Opt. Express 11(8), 4316–4325 (2020).

13. X. Xu, L. Lin, and B. Li, “Automatic protocol for quantifying the vasoconstriction in blood vessel images,” Biomed. Opt. Express 11(4), 2122–2136 (2020).

14. X. Jiang and D. Mojon, “Adaptive local thresholding by verification-based multithreshold probing with application to vessel detection in retinal images,” IEEE Trans. Pattern Anal. Mach. Intell. 25(1), 131–137 (2003).

15. A. F. Frangi, W. J. Niessen, K. J. Vincken, and M. A. Viergever, “Multiscale vessel enhancement filtering,” in International Conference on Medical Image Computing and Computer-Assisted Intervention (1998), pp. 130–137.

16. F. Orujov, R. Maskeliūnas, R. Damaševičius, and W. Wei, “Fuzzy based image edge detection algorithm for blood vessel detection in retinal images,” Appl. Soft Comput. 94, 106452 (2020).

17. Q. Zhao, R. Lin, C. Liu, J. Zhao, G. Si, L. Song, and J. Meng, “Quantitative analysis on in vivo tumor-microvascular images from optical-resolution photoacoustic microscopy,” J. Biophotonics 12(6), e201800421 (2019).

18. W. Wei, Q. Zhang, S. G. Rayner, W. Qin, Y. Cheng, F. Wang, Y. Zheng, and R. K. Wang, “Automated vessel diameter quantification and vessel tracing for OCT angiography,” J. Biophotonics 13(12), e202000248 (2020).

19. E. F. J. Meijer, H. S. Jeong, E. R. Pereira, T. A. Ruggieri, C. Blatter, B. J. Vakoc, and T. P. Padera, “Murine chronic lymph node window for longitudinal intravital lymph node imaging,” Nat. Protoc. 12(8), 1513–1520 (2017).

20. G. M. Palmer, A. N. Fontanella, S. Shan, G. Hanna, G. Zhang, C. L. Fraser, and M. D. Dewhirst, “In vivo optical molecular imaging and analysis in mice using dorsal window chamber models applied to hypoxia, vasculature and fluorescent reporters,” Nat. Protoc. 6(9), 1355–1366 (2011).

21. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: convolutional networks for biomedical image segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention (Springer, 2015), pp. 234–241.

22. L. Liu, J. Cheng, Q. Quan, F. Wu, Y. Wang, and J. Wang, “A survey on U-shaped networks in medical image segmentations,” Neurocomputing 409, 244–258 (2020).

23. Y. Wu, Y. Xia, Y. Song, Y. Zhang, and W. Cai, “Multiscale network followed network model for retinal vessel segmentation,” Medical Image Computing and Computer Assisted Intervention 11071, 119–126 (2018).

24. D. Li and S. Rahardja, “BSEResU-Net: an attention-based before-activation residual U-Net for retinal vessel segmentation,” Comput. Meth. Prog. Biomed. 205, 106070 (2021).

25. J. Hu, H. Wang, J. Wang, Y. Wang, F. He, and J. Zhang, “SA-Net: a scale-attention network for medical image segmentation,” PLoS ONE 16(4), e0247388 (2021).

26. S. Moccia, E. De Momi, S. E. I. Hadji, and L. S. Mattos, “Blood vessel segmentation algorithms - review of methods, datasets and evaluation metrics,” Comput. Meth. Prog. Biomed. 158, 71–91 (2018).

27. Z. Zhou, M. M. R. Siddiquee, and N. Tajbakhsh, “UNet++: redesigning skip connections to exploit multiscale features in image segmentation,” IEEE Trans. Med. Imaging 39(6), 1856–1867 (2020).

28. O. Oktay, J. Schlemper, and L. L. Folgoc, “Attention U-Net: learning where to look for the pancreas,” in 1st Conference on Medical Imaging with Deep Learning (2018), pp. 1–10.

29. DRIVE: Digital Retinal Images for Vessel Extraction, http://www.isi.uu.nl/Research/Databases/DRIVE/ (2004).

30. M. W. Laschke, B. Vollmar, and M. D. Menger, “The dorsal skinfold chamber: window into the dynamic interaction of biomaterials with their surrounding host tissue,” Eur. Cells Mater. 22(7), 147–167 (2011).

31. H. Yuen, J. Princen, J. Illingworth, and J. Kittler, “A comparative study of Hough transform methods for circle finding,” Image Vision Comput. 8(1), 71–77 (1990).

32. A. O. Djekoune, K. Messaoudi, and K. Amara, “Incremental circle Hough transform: an improved method for circle detection,” Optik 133, 17–31 (2017).

33. R. C. Gonzalez and R. E. Woods, Digital Image Processing (Prentice Hall, 2002).

34. S. Pan and Q. Yang, “A survey on transfer learning,” IEEE Trans. Knowl. Data Eng. 22(10), 1345–1359 (2010).

35. H. Zhang, M. Cisse, Y. Dauphin, and D. Lopez-Paz, “Mixup: beyond empirical risk minimization,” in International Conference on Learning Representations (ICLR) (2018), p. 528.

36. V. N. Vapnik, Statistical Learning Theory (Wiley-Interscience, 1998).
