Fast Learning in Networks of Locally-Tuned Processing Units

Cited by: 2877
Authors
Moody, John [1 ]
Darken, Christian J. [1 ]
Affiliations
[1] Yale Comp Sci, POB 2158, New Haven, CT 06520 USA
DOI
10.1162/neco.1989.1.2.281
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
We propose a network architecture which uses a single internal layer of locally-tuned processing units to learn both classification tasks and real-valued function approximations (Moody and Darken 1988). We consider training such networks in a completely supervised manner, but abandon this approach in favor of a more computationally efficient hybrid learning method which combines self-organized and supervised learning. Our networks learn faster than backpropagation for two reasons: the local representations ensure that only a few units respond to any given input, thus reducing computational overhead, and the hybrid learning rules are linear rather than nonlinear, thus leading to faster convergence. Unlike many existing methods for data analysis, our network architecture and learning rules are truly adaptive and are thus appropriate for real-time use.
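The hybrid scheme the abstract describes can be sketched as a two-stage fit: a self-organized stage that places the centers of the locally-tuned units, followed by a supervised, purely linear stage that solves for the output weights. The sketch below is illustrative only: it assumes Gaussian unit responses, plain k-means for the centers, a nearest-center heuristic for the unit widths, and a toy sine-regression target, none of which are taken verbatim from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression task: learn sin(x) on [0, 2*pi].
X = rng.uniform(0.0, 2 * np.pi, size=(200, 1))
y = np.sin(X[:, 0])

# --- Stage 1 (self-organized): place unit centers with k-means ---
k = 10
centers = X[rng.choice(len(X), k, replace=False)].copy()
for _ in range(50):
    # Assign each sample to its nearest center, then move centers to cluster means.
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    assign = d.argmin(axis=1)
    for j in range(k):
        pts = X[assign == j]
        if len(pts):
            centers[j] = pts.mean(axis=0)

# Unit width from the mean nearest-neighbor distance among centers (assumed heuristic).
cd = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
np.fill_diagonal(cd, np.inf)
sigma = cd.min(axis=1).mean()

# --- Stage 2 (supervised, linear): solve for output weights by least squares ---
def design(X):
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    H = np.exp(-(d ** 2) / (2 * sigma ** 2))      # Gaussian responses of the k units
    return np.hstack([H, np.ones((len(X), 1))])   # plus a bias column

w, *_ = np.linalg.lstsq(design(X), y, rcond=None)

# Evaluate the fit on a held-out grid.
Xt = np.linspace(0.0, 2 * np.pi, 100)[:, None]
err = np.abs(design(Xt) @ w - np.sin(Xt[:, 0])).mean()
```

Because only the output weights are trained supervised, and they enter the error linearly, this stage is an ordinary least-squares problem with a closed-form solution, which is the source of the fast convergence claimed over backpropagation.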
Pages: 281-294
Page count: 14