On Friday 05-02-2021, at 5.30pm CET, for the ContinualAI Reading Group, Anh Thai (Georgia Institute of Technology) presented the paper:
Title: Does Continual Learning = Catastrophic Forgetting?
Abstract: Continual learning is known for suffering from catastrophic forgetting, a phenomenon where earlier learned concepts are forgotten at the expense of more recent samples. In this work, we challenge the assumption that continual learning is inevitably associated with catastrophic forgetting by presenting a set of tasks that surprisingly do not suffer from catastrophic forgetting when learned continually. We attempt to provide insight into the properties of these tasks that make them robust to catastrophic forgetting, and into the potential of using a proxy representation learning task for continual classification. We further introduce a novel yet simple algorithm, YASS, that outperforms state-of-the-art methods in the class-incremental categorization learning task. Finally, we present DyRT, a novel tool for tracking the dynamics of representation learning in continual models. The codebase, dataset and pre-trained models released with this article can be found at this https URL.
The event was moderated by: Vincenzo Lomonaco.
You can find the slides, the paper and more materials in our forum: https://continualai.discourse.group/t/continualai-reading-group-does-continual-learning-catastrophic-forgetting/228
------------------------ About ContinualAI ------------------------
ContinualAI is an official non-profit research organization and the largest open community on Continual Learning for AI. We aim to connect people and work better together on this fascinating topic, which we consider fundamental for the future of AI.
• Official website: https://www.continualai.org
• Join us now at https://www.continualai.org/join_us
Please consider supporting us with a small donation at: https://www.continualai.org/supporters
It's thanks to people like you that we are making ContinualAI a reality!