Abstract
Artificial neural networks have gained significant popularity over the past several years in a wide variety of engineering applications. This popularity is due in part to the ability of a neural network trained with a supervised training rule, such as error backpropagation, to acquire a nonparametric representation of the mapping between a set of inputs and outputs without any specific knowledge of the application domain. Given a sufficient number of nonlinear terms, represented by a number of hidden-layer neurons, a multilayer neural network can model any mathematical function that is continuous and differentiable (Hecht-Nielsen, 1990). Difficulties can arise, however, when a network is trained with a limited amount of noisy "real" data and is then expected to operate as part of a system for a specific application. During the training phase the network must acquire an internal representation, stored in its weights, that subsequently generalizes well to unseen data. In the case of a prediction application, generalization capability becomes the paramount design criterion. The generalization performance of a trained network is a strong function of several factors, including the architecture and complexity of the network, the type of supervised training rule employed, and the manner in which data are preprocessed and presented to the network.
© 1996 Optical Society of America
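The training-versus-generalization distinction described in the abstract can be illustrated with a minimal sketch: a single-hidden-layer network trained by error backpropagation on a limited set of noisy samples, then evaluated on unseen data. The target function, network sizes, and learning rate below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression task: a limited number of noisy "real" samples
# of a smooth target mapping (the mapping itself is an assumption).
def target(x):
    return np.sin(2.0 * x)

x_train = rng.uniform(-1, 1, (40, 1))
y_train = target(x_train) + 0.1 * rng.normal(size=(40, 1))
x_test = rng.uniform(-1, 1, (200, 1))   # unseen data for generalization
y_test = target(x_test)

# Single hidden layer: enough nonlinear hidden units can model any
# continuous, differentiable mapping on a compact domain.
n_hidden = 10
W1 = 0.5 * rng.normal(size=(1, n_hidden))
b1 = np.zeros(n_hidden)
W2 = 0.5 * rng.normal(size=(n_hidden, 1))
b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)        # hidden-layer activations
    return h, h @ W2 + b2           # linear output neuron

lr = 0.1
for epoch in range(5000):
    h, y_hat = forward(x_train)
    err = y_hat - y_train           # output-layer error
    # Error backpropagation: chain rule through the two layers.
    dW2 = h.T @ err / len(x_train)
    db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h**2)  # tanh derivative
    dW1 = x_train.T @ dh / len(x_train)
    db1 = dh.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

# Generalization is judged on data the network never saw in training.
mse_train = float(np.mean((forward(x_train)[1] - y_train) ** 2))
mse_test = float(np.mean((forward(x_test)[1] - y_test) ** 2))
```

With too few hidden units the network underfits both sets; with many units and too little noisy data, `mse_train` can fall well below `mse_test`, which is the generalization failure the abstract warns about.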
Dennis A. Montera, Byron M. Welsh, Michael C. Roggemann, and Dennis W. Ruck
AThA.5 Adaptive Optics (AO) 1996