Abstract
An image is a perceived, or measured, representation of the actual scene and depends on coordinate transformations. A set of proper coordinate transformations that provides a complete 4-D linear superposition of every coordinate component can be decomposed into scale, rotation, translation, and velocity transformations. For the case of small continuous incremental changes, a general transformation equation is derived; it is a basic transformation that applies to many physical situations. The transformation is applied to dynamic imagery from a single missile-mounted camera, to static imagery from stereo cameras, and to a rotating surveillance camera. It is shown that mappings are generated which display the range to every scene point, the rotation and velocity at every point, and the centers of rotation. Camera motion yields global maps of these parameters, while intrinsic scene variations yield local maps. The extraction of these parameter maps is addressed, leading to a requirement for a new adaptive training algorithm. If such an algorithm can be developed, neural network architectures could be applied to implement a preprocessor for image feature extraction, where the features are the range, rotation, velocity, and translation of scene elements.
© 1989 Optical Society of America