Abstract: | Several suggestions for introducing explicit forgetting in artificial neural networks have been studied for Willshaw Net and Hopfield Net models of distributed, associative memory. Such forgetting allows a network to function as a short-term memory, or "palimpsest": continuous learning then does not result in eventual catastrophic failure of the memory, but rather in the effective storage of a well-defined number of recent memories, accompanied by the progressive forgetting of older ones. The suggestions have been implemented in sizeable networks and their performances compared. They have also been analysed mathematically and briefly reviewed from physiological, psychological, and implementational perspectives.