
Content addressable networks

Open Access

Abstract

Content addressable networks (CAN) are a family of network learning algorithms for supervised, tutored, and self-organized systems based on binary weights and parallel binary computations. CAN networks directly address the implementation costs associated with high precision weight storage and computation. CAN networks are efficient learning systems with capabilities comparable to analog networks. Supervised CAN systems use error information for weight corrections in a manner analogous to that of backpropagation gradient descent. The tutored CAN network model uses "yes" or "no" feedback as a guide for forming associative categories. The self-organized model derives corrections internally to form recall categories in an adaptive resonance theory style network. The CAN algorithms derive advantages from their intrinsic binary nature and efficient implementation in both optical and VLSI computing systems. CAN solutions for quantized problems may be used directly to initialize analog backpropagation networks. The CAN network has been implemented optically, with optical computation of both recall and learning. Development of supervised CAN networks in VLSI with learning on-chip continues.

© 1992 Optical Society of America
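To make the abstract's idea of error-driven learning with binary weights concrete, here is a minimal illustrative sketch of a single-layer network with bipolar (+1/-1) weights and threshold units, where supervised corrections simply flip the signs of weights that push a wrong output in the wrong direction. This is a generic binary-weight update written for illustration, not the CAN algorithm described in the paper; the function names, flip heuristic, and parameters are all assumptions.

```python
# Illustrative sketch only: binary (+1/-1) weights, parallel threshold recall,
# and an error-driven sign-flip correction loosely analogous to gradient descent.
# Not the authors' CAN algorithm; all names and the update rule are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def recall(W, x):
    """Parallel binary recall: threshold the sum of binary weight/input products."""
    return np.where(W @ x >= 0, 1, -1)

def supervised_step(W, x, target):
    """For each wrong output unit, flip a few weights whose contribution
    disagrees with the sign of the error (the 'gradient-like' signal)."""
    y = recall(W, x)
    err = target - y                      # nonzero only where the output is wrong
    for i in np.flatnonzero(err):
        # weights pushing this unit's sum in the wrong direction
        wrong = np.flatnonzero(W[i] * x != np.sign(err[i]))
        if wrong.size:
            flip = rng.choice(wrong, size=min(2, wrong.size), replace=False)
            W[i, flip] *= -1              # binary correction: flip the sign
    return W

# Tiny usage example with random bipolar patterns
n_in, n_out = 16, 4
W = rng.choice([-1, 1], size=(n_out, n_in))
x = rng.choice([-1, 1], size=n_in)
t = rng.choice([-1, 1], size=n_out)
for _ in range(50):
    W = supervised_step(W, x, t)
print(recall(W, x), t)
```

Because every quantity is binary, recall reduces to sign counting and learning reduces to sign flips, which is the kind of operation that maps naturally onto the optical and VLSI implementations the abstract mentions.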

More Like This
Content Addressable Network Implementations

Stephen A. Brodsky and Clark C. Guest
OWC.3 Optical Computing (IP) 1993

A two-layer high-content addressable optical neural network

Chii-Maw Uang, Shizhuo Yin, and Francis T.S. Yu
MT1 OSA Annual Meeting (FIO) 1992

Proposal for an Optical Content Addressable Memory

Miles Murdocca, John Hall, Saul Levy, and Donald Smith
TuG4 Optical Computing (IP) 1989
