Selective information processing in neural networks is studied through computer simulations of Pavlovian conditioning data. The model reproduces properties of blocking, the inverted-U dependence of learning on interstimulus interval, anticipatory conditioned responses, secondary reinforcement, attentional focusing by conditioned motivational feedback, and limited-capacity short-term memory processing. Conditioning occurs from sensory to drive representations (conditioned reinforcer learning), from drive to sensory representations (incentive motivational learning), and from sensory to motor representations (habit learning). The conditionable pathways contain long-term memory traces that obey a non-Hebbian associative law. The neural model embodies a solution to two key design problems of conditioning, the synchronization and persistence problems. This model of vertebrate learning is compared with data and models of invertebrate learning. Predictions derived from models of vertebrate learning are compared with data about invertebrate learning, including data from Aplysia about facilitator neurons and data from Hermissenda about voltage-dependent Ca2+ currents. A prediction about classical conditioning in all species, called the secondary conditioning alternative, is stated; if confirmed, it would constitute an evolutionary invariant of learning.
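The abstract's "non-Hebbian associative law" can be illustrated with a minimal numerical sketch. The rule below is an assumption for illustration, written in the style of Grossberg's gated learning law, dz/dt = f(x_pre)·(−z + x_post): the presynaptic sampling signal f(x_pre) gates both growth of the trace toward, and decay toward, the postsynaptic activity, so the trace is frozen when the sampling signal is off. This differs from a Hebbian product rule, which changes the trace only when both activities are correlated and cannot actively unlearn.

```python
# Hypothetical sketch (not taken verbatim from the paper): a gated,
# non-Hebbian long-term memory trace update. Names and parameters
# (gated_learning_step, threshold, dt) are illustrative assumptions.

def gated_learning_step(z, x_pre, x_post, dt=0.01, threshold=0.0):
    """One Euler step of dz/dt = f(x_pre) * (-z + x_post)."""
    f = max(x_pre - threshold, 0.0)  # rectified presynaptic sampling signal
    return z + dt * f * (-z + x_post)

# When the presynaptic signal is active, the trace tracks x_post.
z = 0.0
for _ in range(2000):
    z = gated_learning_step(z, x_pre=1.0, x_post=0.8)
# z has converged close to x_post = 0.8

# When the presynaptic signal is silent, the trace is frozen:
# there is no passive decay, unlike a leaky Hebbian trace.
z_frozen = gated_learning_step(0.5, x_pre=0.0, x_post=1.0)
# z_frozen == 0.5
```

The key qualitative point is the gating: learning and unlearning both require the presynaptic sampling signal, which is one way a conditioned-stimulus pathway can selectively read in and read out associations during conditioning.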
© 1987 Optical Society of America