Optica Publishing Group

Minimizing errors in analog neural networks


Abstract

We prove that the relative error in the result of a single-layer neural network is bounded by the product of the relative error in the analog representation of the interconnect matrix and the condition number of that matrix. We then show that the condition number can be readily minimized in a way that does not substantially affect system design. Among the infinitely many transformations that are equivalent with respect to the condition number, some also reduce the required dynamic range.
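The bound stated above can be illustrated numerically. The sketch below, using a hypothetical interconnect matrix and input vector (all values illustrative, not from the paper), perturbs the matrix to model analog representation error and checks that the relative error in the output stays within the condition number times the relative matrix error:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4x4 interconnect matrix and input vector (illustrative only).
A = rng.standard_normal((4, 4))
x = rng.standard_normal(4)

# Model the analog representation error as a small perturbation of A.
dA = 1e-3 * rng.standard_normal((4, 4))
A_analog = A + dA

y = A @ x                # ideal single-layer output
y_analog = A_analog @ x  # output with imperfect analog weights

# Relative error in the result vs. relative error in the matrix (2-norm).
rel_err_result = np.linalg.norm(y_analog - y) / np.linalg.norm(y)
rel_err_matrix = np.linalg.norm(dA, 2) / np.linalg.norm(A, 2)
kappa = np.linalg.cond(A, 2)

# The bound from the abstract: output error <= condition number * matrix error.
assert rel_err_result <= kappa * rel_err_matrix
```

The inequality follows from ||ΔA x|| ≤ ||ΔA|| ||x|| together with ||x|| ≤ ||A⁻¹|| ||Ax||, so it holds for any invertible matrix and nonzero input, which is why minimizing κ(A) directly tightens the achievable accuracy.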

© 1988 Optical Society of America

More Like This
Resonance for analog recurrent neural network

Yurui Qu, Ming Zhou, Erfan Khoram, Nanfang Yu, and Zongfu Yu
FW4C.5 CLEO: Fundamental Science (CLEO:FS) 2023

Infernet: a neural network inference machine

James A. Kottas and Cardinal Warde
THHH1 OSA Annual Meeting (FIO) 1988

Deformable Mirror Device Spatial Light Modulators and its Application to Neural Networks

Dean R. Collins, Jeffrey B. Sampsell, James M. Florence, P. Andrew Penz, and Michael T. Gately
ThB1 Spatial Light Modulators and Applications (SLM) 1988
