Abstract
If hardware implementations of neural networks are to be successful, new models are required that are simpler to implement. As a first step in this direction, a new associative memory model is proposed that is specifically designed for optical implementation. This memory is a modification of Kanerva's sparse distributed memory [1], which eliminates the negative connection elements by employing sparsely coded representations on all layers. The advantages of sparse coding include improved storage capacity, a small weight range, and a simplified learning rule. The implementation of the memory is illustrated with several suggested optical architectures, and the advantages and disadvantages of the model in each case are discussed.
© 1992 Optical Society of America
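The mechanism summarized above — a sparse distributed memory in which sparse binary coding on all layers permits purely non-negative connection weights — can be sketched in a few lines. The following is a minimal illustrative model, not the authors' implementation; the dimensions, the number of hard locations, the activation count, and the top-K thresholding rule are all assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 256   # address/pattern dimension (assumed for this sketch)
M = 1000  # number of hard storage locations
K = 13    # active bits per sparse binary pattern
H = 30    # hard locations activated per access

# Hard locations: random sparse binary addresses.
hard = (rng.random((M, N)) < K / N).astype(np.int64)

# Non-negative counters: with unipolar sparse codes, no signed
# (negative) connection elements are needed.
counters = np.zeros((M, N), dtype=np.int64)

def activate(addr):
    """Select the H hard locations whose addresses overlap addr most."""
    overlap = hard @ addr
    return np.argsort(overlap)[-H:]

def write(addr, data):
    """Unipolar Hebbian update: increment counters at activated locations."""
    counters[activate(addr)] += data

def read(addr):
    """Sum counters over activated locations and re-sparsify to K bits."""
    s = counters[activate(addr)].sum(axis=0)
    out = np.zeros(N, dtype=np.int64)
    out[np.argsort(s)[-K:]] = 1
    return out

# Store one sparse pattern autoassociatively and recall it.
p = np.zeros(N, dtype=np.int64)
p[rng.choice(N, K, replace=False)] = 1
write(p, p)
recalled = read(p)
```

With a single stored pattern the recall is exact, since the same hard locations are activated at read and write time; capacity and noise tolerance in the full model depend on the sparsity and activation parameters, which the paper analyzes.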