Efficiency of computation in neural networks

Abstract

A neural network with threshold logic and feedback [1] is a plausible model for associative memory. It can be used directly to store and retrieve information with robustness and error-correction capability, and it performs certain computations naturally, such as nearest-neighbor search. When a general computational problem is addressed, however, a special-purpose network must be designed for it, based on some energy function; the resulting network is usually large, and the solution it produces is approximate. By the efficiency of computation we mean a quantitative measure of how cost-effective this approach is and how far it falls short of the theoretical limits. The efficiency of information storage in neural networks has been determined [2], but the computational aspect involves essentially different ideas. The main issues are the assessment of the raw computational power of a network, the complexity of a problem, and the efficiency with which the problem can be embedded in the network.

© 1985 Optical Society of America
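The threshold-logic network with feedback referred to above is the Hopfield-style associative memory [1]. As an illustrative sketch only (not the paper's own analysis), the Python/NumPy example below stores a few ±1 patterns with a Hebbian outer-product rule and retrieves them by asynchronous threshold updates, the nearest-neighbor-like recall the abstract mentions. All function names and parameters here are our own illustrative choices.

import numpy as np

def store(patterns):
    # Hebbian (outer-product) weights for a set of +/-1 patterns; no self-coupling.
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0.0)
    return W / n

def recall(W, probe, max_sweeps=100, seed=0):
    # Asynchronous threshold-logic updates until a fixed point (or sweep limit).
    rng = np.random.default_rng(seed)
    s = probe.astype(float)
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(len(s)):
            new = 1.0 if W[i] @ s >= 0 else -1.0
            if new != s[i]:
                s[i], changed = new, True
        if not changed:          # fixed point: a stored (or spurious) memory
            break
    return s.astype(int)

# Toy usage: two stored patterns; a probe with two flipped bits is usually corrected.
patterns = np.array([[ 1, -1,  1, -1,  1, -1,  1, -1],
                     [ 1,  1, -1, -1,  1,  1, -1, -1]])
W = store(patterns)
probe = patterns[0].copy()
probe[:2] *= -1
print(recall(W, probe))   # typically recovers patterns[0]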
