Abstract
The central task in image understanding is to determine scene properties from image properties. This is difficult because the problem, formally posed, is underconstrained. Methods that infer scene properties from image properties therefore make assumptions about how the world determines what we see. In remote sensing, some of these assumptions can be made explicit. Available scene knowledge, in the form of a digital terrain model and a ground cover map, is used to synthesize an image for a given date and time. The scene radiance equation used assumes that the multispectral bidirectional reflectance distribution function of the surface is separable, and is based on simple models of direct sun illumination, diffuse sky illumination, and atmospheric path radiance. Synthesis predicts how the surface will look; unknown parameters of the model are estimated from the real image. The process iterates, since comparison of the real and synthetic images contributes to an emerging description of the particular scene in view. A statistical comparison of the real and synthetic images is used to judge how well the model represents the mapping from scene to image. The image itself becomes the unifying representation for comparing what is known of the scene with what is seen.
© 1985 Optical Society of America
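The abstract describes a scene radiance model built from a separable surface reflectance and simple terms for direct sun, diffuse sky, and atmospheric path radiance. The paper's exact equation is not reproduced here; the sketch below is an illustrative, assumed form in which a Lambertian per-band reflectance (from the ground cover map) multiplies a geometric shading factor (from the terrain model and sun position), with the sky and path terms added. All variable names and values are hypothetical.

```python
import numpy as np

def scene_radiance(rho, normal, sun_dir, E_sun, E_sky, L_path, sky_view=1.0):
    """Predict per-band radiance for one pixel (illustrative sketch).

    rho      : (B,) surface reflectance per band (from the ground cover map)
    normal   : (3,) unit surface normal (from the digital terrain model)
    sun_dir  : (3,) unit vector toward the sun (from date and time)
    E_sun    : (B,) direct solar irradiance per band
    E_sky    : (B,) diffuse sky irradiance per band
    L_path   : (B,) atmospheric path radiance per band
    sky_view : fraction of the sky hemisphere visible from the pixel
    """
    # Shading factor: cosine of the solar incidence angle, clamped so
    # that self-shadowed slopes receive no direct illumination.
    cos_i = max(float(np.dot(normal, sun_dir)), 0.0)
    # Separable model: reflectance times illumination, plus path radiance.
    return rho * (E_sun * cos_i + E_sky * sky_view) / np.pi + L_path

# Example: flat ground, sun 60 degrees above the horizon, three
# hypothetical spectral bands with made-up irradiance values.
rho = np.array([0.2, 0.3, 0.4])
normal = np.array([0.0, 0.0, 1.0])
elev = np.radians(60.0)
sun_dir = np.array([0.0, np.cos(elev), np.sin(elev)])
E_sun = np.array([1000.0, 900.0, 800.0])
E_sky = np.array([100.0, 120.0, 150.0])
L_path = np.array([20.0, 15.0, 10.0])
L = scene_radiance(rho, normal, sun_dir, E_sun, E_sky, L_path)
```

In a synthesis-and-comparison loop of the kind the abstract describes, a function like this would be evaluated over the terrain model to produce the synthetic image, and the unknown illumination and atmosphere parameters would then be re-estimated from the statistical comparison with the real image.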