Adaptive fusion of human visual sensitive features for surveillance video summarization

Abstract

Surveillance cameras capture large volumes of continuous video every day, and manually identifying significant events in this data for analysis or investigation is laborious and tedious. Existing summarization approaches sometimes neglect key frames with significant visual content and/or select unimportant frames with little or no activity. To address this problem, this paper proposes a video summarization technique that combines three multimodal human-visual-sensitive features: foreground objects, motion information, and visual saliency. Foreground objects are among the most important elements of a video stream, as they carry detailed information and play a major role in significant events. Motion is another stimulus that strongly attracts human visual attention, and it is computed in both the spatial and the frequency domains: spatial motion information localizes object motion accurately but is sensitive to illumination changes, whereas frequency-domain motion information is robust to illumination changes but easily affected by noise, so the two are employed together. Finally, the visual saliency cue indicates a viewer's level of attention and helps determine key frames. Because no single feature performs well on its own, an adaptive linear weighted fusion scheme is proposed to combine the features and rank video frames for summarization. Experimental results show that the proposed method outperforms state-of-the-art methods.
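As a rough illustration of the pipeline the abstract describes, the sketch below scores each frame with the three cues and fuses them with an adaptive linear weighting. Since the full article is not reproduced here, every concrete choice is an assumption rather than the paper's method: MOG2 background subtraction stands in for the foreground-object cue, mean absolute frame differencing for spatial motion, the change in FFT magnitude spectra for frequency-domain motion, OpenCV's spectral-residual saliency (requires opencv-contrib-python) for the visual-saliency cue, and variance-based weights for the adaptive fusion.

```python
# Hedged sketch of multi-cue key-frame scoring; all component choices are
# assumptions, not the paper's exact method. Requires opencv-contrib-python.
import cv2
import numpy as np

def frame_scores(video_path):
    """Per-frame scores for the cues: foreground, motion (spatial and
    frequency components), and saliency. Returns an array of shape (4, n)."""
    cap = cv2.VideoCapture(video_path)
    bg = cv2.createBackgroundSubtractorMOG2()  # assumed foreground detector
    sal = cv2.saliency.StaticSaliencySpectralResidual_create()
    fg, sm, fm, vs = [], [], [], []
    prev_gray = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        # Foreground cue: fraction of pixels flagged by background subtraction.
        fg.append(np.count_nonzero(bg.apply(frame)) / gray.size)
        # Saliency cue: mean of the spectral-residual saliency map.
        found, sal_map = sal.computeSaliency(frame)
        vs.append(float(sal_map.mean()) if found else 0.0)
        if prev_gray is None:
            sm.append(0.0); fm.append(0.0)
        else:
            # Spatial motion: mean absolute frame difference.
            sm.append(float(np.abs(gray - prev_gray).mean()) / 255.0)
            # Frequency motion: relative change in FFT magnitude spectra.
            m1 = np.abs(np.fft.fft2(prev_gray))
            m2 = np.abs(np.fft.fft2(gray))
            fm.append(float(np.abs(m2 - m1).mean() / (m1.mean() + 1e-8)))
        prev_gray = gray
    cap.release()
    return np.array([fg, sm, fm, vs])

def adaptive_fuse(scores):
    """Min-max normalize each cue, weight it by its variance across frames
    (an assumed proxy for discriminative power), and combine linearly."""
    lo = scores.min(axis=1, keepdims=True)
    rng = scores.max(axis=1, keepdims=True) - lo
    norm = (scores - lo) / (rng + 1e-8)
    w = norm.var(axis=1) + 1e-8
    return (w / w.sum()) @ norm  # fused per-frame score

def key_frames(fused, keep_ratio=0.05):
    """Rank frames by fused score and keep the top fraction, in time order."""
    k = max(1, int(len(fused) * keep_ratio))
    return np.sort(np.argsort(fused)[-k:])
```

For example, `key_frames(adaptive_fuse(frame_scores("cam01.mp4")))` would return the indices of the selected key frames; `cam01.mp4` is a placeholder path, and the variance-based weighting is only one plausible way to make the linear fusion adaptive.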

© 2017 Optical Society of America

More Like This
Toward adaptive fusion of multiple cues for salient region detection

Hong Li, Enhua Wu, and Wen Wu
J. Opt. Soc. Am. A 33(12) 2365-2375 (2016)

Salient object detection using coarse-to-fine processing

Qiangqiang Zhou, Lin Zhang, Weidong Zhao, Xianhui Liu, Yufei Chen, and Zhicheng Wang
J. Opt. Soc. Am. A 34(3) 370-383 (2017)

Saliency of color image derivatives: a comparison between computational models and human perception

Eduard Vazquez, Theo Gevers, Marcel Lucassen, Joost van de Weijer, and Ramon Baldrich
J. Opt. Soc. Am. A 27(3) 613-621 (2010)

Figures (14), Tables (2), and Equations (15) appear in the full article.
