Overcoming catastrophic forgetting in neural networks

https://arxiv.org/pdf/1612.00796v1.pdf

The ability to learn tasks in a sequential fashion is crucial to the development of
artificial intelligence. Neural networks are not, in general, capable of this and it
has been widely thought that catastrophic forgetting is an inevitable feature of
connectionist models. We show that it is possible to overcome this limitation and
train networks that can maintain expertise on tasks which they have not experienced
for a long time. Our approach remembers old tasks by selectively slowing down
learning on the weights important for those tasks. We demonstrate our approach is
scalable and effective by solving a set of classification tasks based on the MNIST
handwritten digit dataset and by learning several Atari 2600 games sequentially.
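The approach the paper proposes is Elastic Weight Consolidation (EWC): after training on task A, learning on task B minimizes L(θ) = L_B(θ) + Σ_i (λ/2) F_i (θ_i − θ*_{A,i})², which anchors each weight θ_i to its task-A value θ*_{A,i} in proportion to a diagonal Fisher information estimate F_i, so important weights change slowly while unimportant ones stay free to learn. The sketch below is an illustrative PyTorch rendering of that penalty, not code from the paper; the names `diagonal_fisher`, `ewc_penalty`, `star_params`, and `lam`, and the squared-gradient (empirical Fisher) estimate, are assumptions of this sketch.

    import torch

    def diagonal_fisher(model, data_loader, loss_fn):
        # Empirical diagonal Fisher estimate: average squared gradients of the
        # task-A loss over one pass through task A's data. (Squared empirical
        # gradients are a common proxy for the Fisher information diagonal.)
        fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
        n_batches = 0
        for x, y in data_loader:  # assumes (input, target) batches
            model.zero_grad()
            loss_fn(model(x), y).backward()
            for n, p in model.named_parameters():
                if p.grad is not None:
                    fisher[n] += p.grad.detach() ** 2
            n_batches += 1
        return {n: f / max(n_batches, 1) for n, f in fisher.items()}

    def ewc_penalty(model, fisher, star_params, lam=1.0):
        # Quadratic EWC penalty: (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2.
        # star_params holds a copy of the weights taken right after task A.
        return (lam / 2.0) * sum(
            (fisher[n] * (p - star_params[n]) ** 2).sum()
            for n, p in model.named_parameters()
        )

In this sketch, one would snapshot `star_params = {n: p.detach().clone() for n, p in model.named_parameters()}` and `fisher = diagonal_fisher(...)` at the end of task A, then train task B on `loss_B + ewc_penalty(model, fisher, star_params, lam)`.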

    Original author: XiaohuZhu (朱小虎)
    Original source: https://www.jianshu.com/p/365811d784b1#comments
    This article is reposted from the web solely to share knowledge; if it infringes any rights, please contact the blog owner for removal.