Abstract
Content-addressable networks (CAN) are a family of network learning algorithms for supervised, tutored, and self-organized systems based on binary weights and parallel binary computations. CAN networks directly address the implementation costs associated with high-precision weight storage and computation. CAN networks are efficient learning systems with capabilities comparable to those of analog networks. Supervised CAN systems use error information for weight corrections in a manner analogous to backpropagation gradient descent. The tutored CAN network model uses "yes" or "no" feedback as a guide for forming associative categories. The self-organized model derives corrections internally to form recall categories in an adaptive-resonance-theory-style network. The CAN algorithms derive advantages from their intrinsic binary nature and efficient implementation in both optical and VLSI computing systems. CAN solutions for quantized problems may be used directly to initialize analog backpropagation networks. The CAN network has been implemented optically, with optical computation of both recall and learning. Development of supervised CAN networks with on-chip learning in VLSI continues.
© 1992 Optical Society of America
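The abstract's central idea, learning with one-bit weights driven by error feedback, can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's algorithm: it uses integer accumulators for the update history and takes their sign as the binary {-1, +1} weight used at recall, so weight storage and the recall computation are purely binary. The function and variable names are hypothetical.

```python
from itertools import product

def sign(v):
    # Binary convention: treat 0 as +1 so weights stay in {-1, +1}.
    return 1 if v >= 0 else -1

def train_binary_perceptron(samples, n, epochs=20):
    """Sign-error-driven learning with binary {-1, +1} weights.

    Integer accumulators hold the update history; the weight used for
    recall is their sign, so storage and recall need only one bit per
    weight. This captures the spirit of binary-weight learning, not the
    paper's exact CAN update rule.
    """
    acc = [0] * n
    for _ in range(epochs):
        errors = 0
        for x, y in samples:
            w = [sign(a) for a in acc]              # binarized weights
            if sign(sum(wi * xi for wi, xi in zip(w, x))) != y:
                errors += 1
                for i in range(n):
                    acc[i] += y * x[i]              # error-driven correction
        if errors == 0:                             # converged on all samples
            break
    return [sign(a) for a in acc]

# Toy task: labels generated by a hidden binary weight vector.
target = (1, 1, -1)
samples = [(x, sign(sum(t * xi for t, xi in zip(target, x))))
           for x in product([-1, 1], repeat=3)]
w = train_binary_perceptron(samples, n=3)
correct = sum(sign(sum(wi * xi for wi, xi in zip(w, x))) == y
              for x, y in samples)
```

On this separable toy problem the binarized weights recover the hidden target vector and classify all eight binary input patterns correctly, illustrating why binary recall hardware can suffice once learning has quantized the solution.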