A Learning Algorithm for Continually Running Fully Recurrent Neural Networks

Cited by: 2757
Authors
Williams, Ronald J. [1 ]
Zipser, David [2 ]
Affiliations
[1] Northeastern Univ, Coll Comp Sci, Boston, MA 02115 USA
[2] Univ Calif San Diego, Inst Cognit Sci, La Jolla, CA 92093 USA
Funding
US National Science Foundation
DOI
10.1162/neco.1989.1.2.270
CLC classification
TP18 (Artificial intelligence theory)
Subject classification codes
081104; 0812; 0835; 1405
Abstract
The exact form of a gradient-following learning algorithm for completely recurrent networks running in continually sampled time is derived and used as the basis for practical algorithms for temporal supervised learning tasks. These algorithms have (1) the advantage that they do not require a precisely defined training interval, operating while the network runs; and (2) the disadvantage that they require nonlocal communication in the network being trained and are computationally expensive. These algorithms allow networks having recurrent connections to learn complex tasks that require the retention of information over time periods having either fixed or indefinite length.
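The algorithm described in this abstract is what later literature calls real-time recurrent learning (RTRL): sensitivities of every unit's output to every weight are propagated forward alongside the network's own dynamics, so learning can run while the network runs, at the nonlocal, computationally expensive cost the abstract notes. As a rough illustration only, here is a minimal sketch of one such update step; the function name `rtrl_step`, the logistic activation, the learning rate, and all array shapes are assumptions made for this example, not details taken from the paper.

```python
import numpy as np

def rtrl_step(W, y, x, p, target, lr=0.1):
    """One forward step plus one weight update of real-time recurrent
    learning for a fully recurrent sigmoid network (a sketch).
    Shapes: W is (n, n+m) weights, y is (n,) current unit outputs,
    x is (m,) external inputs, p is (n, n, n+m) sensitivities
    dy_k/dw_ij, target is (n,) with np.nan where no teacher signal
    is given for a unit at this time step."""
    n = y.shape[0]
    z = np.concatenate([y, x])           # unit outputs plus external inputs
    s = W @ z                            # net input to each unit
    y_new = 1.0 / (1.0 + np.exp(-s))     # logistic activations
    fprime = y_new * (1.0 - y_new)

    # Sensitivity recursion: every p_k,ij depends on all p_l,ij, which is
    # the nonlocal communication the abstract refers to.
    # p_k,ij(t+1) = f'(s_k) * [ sum_l w_kl * p_l,ij(t) + delta_ki * z_j ]
    p_new = np.einsum('kl,lij->kij', W[:, :n], p)
    p_new[np.arange(n), np.arange(n), :] += z      # delta_ki * z_j term
    p_new *= fprime[:, None, None]

    # Error is injected only at units that have a target right now, so no
    # fixed training interval is needed; gradient of the squared error is
    # accumulated through the sensitivities.
    e = np.where(np.isnan(target), 0.0, target - y_new)
    grad = np.einsum('k,kij->ij', e, p_new)
    W_new = W + lr * grad                # gradient-following weight change
    return W_new, y_new, p_new
```

Storing `p` takes O(n^3) memory and updating it O(n^4) time per step for n fully connected units, which is the computational expense the abstract flags.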
Pages: 270-280
Page count: 11
References
19 in total
[1] Almeida, L. B., 1987. IEEE First International Conference on Neural Networks, p. 609.
[2] Bachrach, J., 1988. Thesis.
[3] Elman, J. L., 1988. Report 8801, CRL, Univ. of California.
[4] Gallant, S. I., 1988. Proceedings of the 10th Annual Conference of the Cognitive Science Society, p. 40.
[5] Hopfield, J. J., 1982. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences of the United States of America - Biological Sciences, 79(8): 2554-2558.
[6] Jordan, M. I., 1986. Proceedings of the Conference of the Cognitive Science Society, p. 531.
[7] Lapedes, A.; Farber, R., 1986. A self-optimizing, nonsymmetrical neural net for content addressable memory and pattern recognition. Physica D: Nonlinear Phenomena, 22(1-3): 247-259.
[8] McBride, L. E.; Narendra, K. S., 1965. Optimization of time-varying systems. IEEE Transactions on Automatic Control, AC-10(3): 289-.
[9] McClelland, J. L., 1986, ENCY DATABASE SYST.
[10] Mozer, M. C., 1988. Technical report.