Optica Publishing Group

Low-light image enhancement based on Retinex-Net with color restoration


Abstract

Low-light images often suffer from degradations such as loss of detail, color distortion, and prominent noise. In this paper, a Retinex-Net model with a color-restoration loss function is proposed to reduce color distortion in low-light image enhancement. The model trains a decomposition network (decom-net) and a color-recovery network to decompose low-light images and restore the color of the reflectance images, respectively. First, a convolutional neural network with designed loss functions is used in the decom-net to decompose each low-light/normal-light image pair into reflectance and illumination images; the low-light image serves as the network input, and the reflectance image decomposed from the normal-light image serves as the label. Then, an end-to-end color-recovery network with reduced model and time complexity is trained with the color-recovery loss function to obtain a corrected reflectance map of higher perceptual quality, while gamma correction is applied to the decomposed illumination image. Finally, the corrected reflectance image and the adjusted illumination image are recombined to produce the enhanced image. Experimental results show that the proposed model achieves lower brightness-order-error (LOE) and natural image quality evaluator (NIQE) values; on the low-light dataset, the average LOE and NIQE are reduced to 942 and 6.42, respectively, significantly improving image quality compared with other low-light enhancement methods. Overall, the proposed method effectively improves image illuminance and restores color information in an end-to-end learning process for low-light images.
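The final step described in the abstract (gamma-correcting the decomposed illumination map and multiplying it element-wise with the corrected reflectance map) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function name, array shapes, and the gamma value are assumptions, and the trained decom-net and color-recovery network are not modeled here.

```python
import numpy as np

def recombine(reflectance: np.ndarray, illumination: np.ndarray,
              gamma: float = 2.2) -> np.ndarray:
    """Recombine a reflectance map (H, W, 3) and an illumination map (H, W, 1),
    both assumed to be in [0, 1], into the enhanced image.

    Gamma correction with exponent 1/gamma brightens dark regions of the
    illumination map before the element-wise product with the reflectance.
    """
    illumination_adj = np.power(illumination, 1.0 / gamma)
    return np.clip(reflectance * illumination_adj, 0.0, 1.0)

# Toy example: a dim patch is brightened relative to the raw product R * L.
R = np.full((4, 4, 3), 0.8)   # reflectance (color-restored in the real pipeline)
L = np.full((4, 4, 1), 0.1)   # low illumination
out = recombine(R, L)
print(out.max())              # larger than the unenhanced product 0.8 * 0.1 = 0.08
```

In Retinex-based pipelines an image is modeled as the product of reflectance and illumination, so adjusting only the illumination map and then recombining preserves scene content while correcting brightness.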

© 2023 Optica Publishing Group

More Like This
Double-function enhancement algorithm for low-illumination images based on retinex theory

Liwei Chen, Yanyan Liu, Guoning Li, Jintao Hong, Jin Li, and Jiantao Peng
J. Opt. Soc. Am. A 40(2) 316-325 (2023)

Joint Retinex-based variational model and CLAHE-in-CIELUV for enhancement of low-quality color retinal images

Zongheng Huang, Chen Tang, Min Xu, and Zhenkun Lei
Appl. Opt. 59(28) 8628-8637 (2020)

CODEN: combined optimization-based decomposition and learning-based enhancement network for Retinex-based brightness and contrast enhancement

Sangjae Ahn, Joongchol Shin, Heunseung Lim, Jaehee Lee, and Joonki Paik
Opt. Express 30(13) 23608-23621 (2022)

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.


Figures (7)


Tables (3)


Equations (6)

