Abstract
The two-dimensional Gabor transform has been proposed as a model of the visual receptive field of simple cells in mammalian striate cortex. Such a representation can make image features such as localized spatial frequencies and edge orientations explicit while achieving data compression. This paper analyzes the computational steps needed to achieve this transformation and presents typical execution times for several computers. The use of an adaptive iteration step size is shown to improve performance significantly. Computational requirements are examined for both sequential (SISD) and parallel (SIMD) computers. For an n × n image a SISD machine requires O(n² log n) operations; SIMD computations are bounded by O(n log n), O(log n²), or O(log n), depending on the machine architecture. A parallel computer with global addition capability and a processor for each pixel could perform a Gabor transform in near real time. On existing computers, near-real-time performance is not possible for moderate-sized images. Sequential computers show better cost/performance ratios than parallel machines.
© 1992 Optical Society of America
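The sequential O(n² log n) bound quoted in the abstract matches the cost of FFT-based filtering of an n × n image. As a rough illustration, the sketch below builds a 2D Gabor kernel (a Gaussian envelope modulating a complex sinusoid) and applies it by pointwise multiplication in the frequency domain; the parameter names and values (`sigma`, `freq`, `theta`) are illustrative assumptions, not the paper's.

```python
import numpy as np

def gabor_kernel(n, sigma=4.0, freq=0.2, theta=0.0):
    """Illustrative 2D Gabor kernel: Gaussian envelope times a complex
    sinusoid at spatial frequency `freq`, oriented at angle `theta`.
    Parameter values are assumptions, not taken from the paper."""
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    x_rot = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.exp(2j * np.pi * freq * x_rot)

def gabor_response(image, kernel):
    """Filter via the FFT: each 2D FFT of an n x n image costs
    O(n^2 log n) on a sequential (SISD) machine, which dominates
    the O(n^2) pointwise product."""
    return np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel))

n = 32
img = np.random.rand(n, n)
resp = gabor_response(img, gabor_kernel(n))  # complex n x n response
```

On a SIMD machine with one processor per pixel, the pointwise product becomes O(1) and the FFT butterflies parallelize, which is where the O(log n)-class bounds in the abstract come from.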