Abstract
Lowering the excitation to reduce phototoxicity and photobleaching, while numerically enhancing the fluorescence signal, is a useful way to support long-term observation in fluorescence microscopy. However, invalid features, such as the near-zero-gradient dark backgrounds in fluorescence images, negatively affect neural networks because of the locality of network training. This problem makes it difficult to directly extend mature deep learning-based image enhancement methods to fluorescence imaging. To reduce this negative optimization effect, we previously designed Kindred-Nets in conjunction with a mixed fine-tuning scheme, but the mapping learned from the fine-tuning dataset may not fully apply to fluorescence images. In this work, we propose what is, to the best of our knowledge, a new deep low-excitation fluorescence imaging global enhancement framework, named Deep-Gamma, that is completely different from our previously designed scheme. It contains GammaAtt, a self-attention module that calculates attention weights from global features, thus avoiding negative optimization. Moreover, in contrast to classical self-attention modules that output multidimensional attention matrices, GammaAtt outputs only a few parameters, which significantly reduces the optimization difficulty and thus supports easy convergence on a small-scale fluorescence microscopy dataset. As proven by both simulations and experiments, Deep-Gamma provides higher-quality fluorescence-enhanced images than other state-of-the-art methods. Deep-Gamma is envisioned as a future deep low-excitation fluorescence imaging enhancement modality with significant potential in medical imaging applications. This work is open source and available at https://github.com/ZhiboXiao/Deep-Gamma.
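The core idea of global, parameter-based enhancement can be illustrated with a minimal sketch: compute global statistics of the image (so near-zero-gradient dark backgrounds cannot locally dominate the mapping), predict a single positive gamma from them, and apply a power-law correction to the whole image. The function name, the feature choice (mean and standard deviation), and the weights `w`, `b` below are hypothetical placeholders, not the authors' actual GammaAtt network.

```python
import numpy as np

def global_gamma_enhance(img, w, b):
    """Hedged, illustrative sketch of global gamma enhancement
    (hypothetical; not the authors' exact GammaAtt module).

    img : float array with values in [0, 1]
    w, b: stand-ins for learned weights mapping global features to gamma
    """
    # Global features over the whole image: no local patches, so dark,
    # near-zero-gradient backgrounds do not produce per-pixel artifacts.
    feats = np.array([img.mean(), img.std()])
    # Softplus keeps gamma positive; gamma < 1 brightens dim images.
    gamma = np.log1p(np.exp(w @ feats + b))
    return np.clip(img, 0.0, 1.0) ** gamma

# Usage: a dim, low-excitation-like image is brightened globally.
dim = np.full((4, 4), 0.1)
out = global_gamma_enhance(dim, w=np.array([2.0, 0.0]), b=-1.5)
```

Because only a handful of scalar parameters (here, a single gamma) are predicted rather than a full attention matrix, the mapping can plausibly be learned from a small dataset, which is the design point the abstract emphasizes.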
© 2023 Optica Publishing Group