Abstract
The construction of robust object-oriented depth maps is fundamental to understanding the topography and motion of objects within a terrain. A novel computational vision system has been developed that builds object maps using algorithms based on biological models. The method's speed and robustness are demonstrated on natural outdoor scenes, where the effects of terrain, shadows, scene illumination, reference landmarks, and scene complexity can be systematically explored. The performance of a dynamic object-oriented computational vision system based on the layered neural-network architecture used for primate depth perception is presented. The cortical architecture indicated by neurobiological studies, spanning multiple brain areas, and its embodiment in the computational vision system are described. The roles of visual landmarks, multiresolution texture, shape from shading, boundary completion, region content filling, motion parallax, structure from motion, occlusion information, and Hebbian learning in human and robotic vision are discussed.
© 1991 Optical Society of America
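Among the mechanisms named above, Hebbian learning admits a compact illustration. The sketch below is not the paper's implementation; it is a minimal, generic version of the classical Hebbian rule (Δw = η · y · xᵀ, "cells that fire together wire together"), with a toy co-activation loop whose inputs and learning rate are entirely illustrative.

```python
import numpy as np

def hebbian_update(w, x, y, lr=0.1):
    """One Hebbian step: strengthen each weight in proportion to the
    co-occurrence of presynaptic activity x and postsynaptic activity y.
    delta_w = lr * outer(y, x); no decay or normalization is applied."""
    return w + lr * np.outer(y, x)

# Toy demonstration: input unit 0 is repeatedly co-active with the output
# unit, so only the weight from unit 0 grows.
w = np.zeros((1, 2))          # 1 output unit, 2 input units
x = np.array([1.0, 0.0])      # presynaptic activity (unit 0 active)
y = np.array([1.0])           # postsynaptic activity (driven externally)
for _ in range(5):
    w = hebbian_update(w, x, y)
# After 5 steps: w[0, 0] == 0.5, w[0, 1] == 0.0
```

In a layered network such as the one described here, a rule of this form would operate locally at each connection, which is what makes it attractive for biologically modeled vision architectures.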
Sherry A. Bergmann, Troy R. Norin, and Mark O. Freeman
ThL5 OSA Annual Meeting (FIO) 1991