
ASA-BiSeNet: improved real-time approach for road lane semantic segmentation of low-light autonomous driving road scenes


Abstract

Road environmental perception is an essential prerequisite for the autonomous driving of intelligent vehicles, and road lane detection plays a crucial role in it. However, road lane detection in complex road scenes is challenging due to poor illumination, occlusion by other objects, and the influence of unrelated road markings; these difficulties also hinder the commercial application of autonomous driving technology across diverse road scenes. To minimize the impact of illumination on road lane detection, researchers use deep learning (DL) techniques to enhance low-light images. In this study, road lane detection is treated as an image segmentation problem and addressed with a DL approach to meet the challenge of rapid environmental changes during driving. First, the Zero-DCE++ approach is used to enhance video frames of road scenes captured under low-light conditions. Then, building on the bilateral segmentation network (BiSeNet), an approach that associates self-attention with BiSeNet (ASA-BiSeNet) and integrates two attention mechanisms is designed to improve road lane detection. Finally, ASA-BiSeNet is trained on a self-made road lane dataset for the road lane detection task and compared against the baseline BiSeNet approach. Experimental results show that ASA-BiSeNet runs at about 152.5 frames per second (FPS) with a mean intersection over union of 71.39%, which meets the requirements of real-time autonomous driving.
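Since the full article is subscription-gated, the following PyTorch sketch only illustrates the general shape of the pipeline the abstract describes: a Zero-DCE++-style iterative curve enhancer for low-light frames, an attention-gated fusion of a BiSeNet-style two-path backbone, and the mIoU metric used for evaluation. CurveEnhancer, AttentionFusion, and mean_iou are hypothetical names assumed for illustration, not the authors' implementation, and the specific attention design of ASA-BiSeNet may differ.

```python
# Minimal sketch, assuming a Zero-DCE++-style enhancer and a channel-attention
# fusion of BiSeNet's spatial and context paths. Not the authors' code.
import torch
import torch.nn as nn


class CurveEnhancer(nn.Module):
    """Zero-DCE++-style enhancement: a light CNN predicts a per-pixel curve
    map A that is applied iteratively as LE(x) = x + A * x * (1 - x)."""

    def __init__(self, iterations: int = 8):
        super().__init__()
        self.iterations = iterations
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a = self.net(x)  # curve parameters in [-1, 1], reused each iteration
        for _ in range(self.iterations):
            x = x + a * x * (1 - x)  # quadratic pixel-wise curve adjustment
        return x.clamp(0, 1)


class AttentionFusion(nn.Module):
    """Channel-attention fusion of BiSeNet's spatial and context features,
    a stand-in for the two attention mechanisms integrated in ASA-BiSeNet."""

    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, 1), nn.Sigmoid(),
        )

    def forward(self, spatial_feat: torch.Tensor,
                context_feat: torch.Tensor) -> torch.Tensor:
        fused = spatial_feat + context_feat
        return fused * self.gate(fused)  # reweight channels of the fused map


def mean_iou(pred: torch.Tensor, target: torch.Tensor,
             num_classes: int) -> float:
    """Mean intersection over union over classes present in the labels;
    this is the metric reported as 71.39% in the abstract."""
    ious = []
    for c in range(num_classes):
        inter = ((pred == c) & (target == c)).sum().item()
        union = ((pred == c) | (target == c)).sum().item()
        if union > 0:
            ious.append(inter / union)
    return sum(ious) / len(ious)
```

In use, a low-light frame would pass through the enhancer before segmentation, with the attention-fused features feeding the segmentation head; the real-time claim corresponds to this whole per-frame path running at roughly 152.5 FPS.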

© 2023 Optica Publishing Group

More Like This
MAFFNet: real-time multi-level attention feature fusion network with RGB-D semantic segmentation for autonomous driving

Tongfei Lv, Yu Zhang, Lin Luo, and Xiaorong Gao
Appl. Opt. 61(9) 2219-2229 (2022)

Robustifying semantic cognition of traversability across wearable RGB-depth cameras

Kailun Yang, Luis M. Bergasa, Eduardo Romera, and Kaiwei Wang
Appl. Opt. 58(12) 3141-3155 (2019)

Polarization-driven semantic segmentation via efficient attention-bridged fusion

Kaite Xiang, Kailun Yang, and Kaiwei Wang
Opt. Express 29(4) 4802-4820 (2021)

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

