
Our paper “Where is the Truth? The Risk of Getting Confounded in a Continual World” got accepted as a spotlight at ICML 2025

    Our paper “Where is the Truth? The Risk of Getting Confounded in a Continual World” was accepted at the International Conference on Machine Learning (ICML) 2025 and was selected as a poster spotlight (awarded to the top 2.6% of all submissions). Big congratulations to shared first authors Roshni & Florian and all co-authors for this achievement.

    In the paper, we introduce an important new phenomenon for lifelong & continual learning: continual confounding. In continual confounding, “distracting” features appear and disappear over time and can render learned solutions inadequate, even if a continual learning algorithm manages to accumulate all knowledge. That is, even with full memory of all past data, the sequential nature of confounders poses a significant challenge. To study this challenge, we introduce a new framework and dataset called ConCon. In the paper, we show that models succeed when trained jointly on all ConCon data, but sequential training on accumulated tasks fails without further intervention! We hope this new setting inspires a new generation of continual learning algorithms – algorithms that go beyond the simplified narrative that merely accumulating all knowledge over time is beneficial.
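    To make the core intuition concrete, here is a minimal toy sketch (not the actual CLEVR-based ConCon dataset, and all names are illustrative): within a single task, a task-specific confounder can match the labels just as well as the ground-truth feature, so a learner seeing only that task cannot tell them apart; only across all tasks jointly does the ground truth remain the sole reliable predictor.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000  # samples per task

    def make_task(active_confounder):
        """Toy continually confounded task: the ground-truth feature
        always matches the label; the confounder feature matches it
        only while its task is active, and is random noise otherwise."""
        y = rng.integers(0, 2, n)                # binary labels
        g = y.copy()                             # ground-truth feature
        c = y.copy() if active_confounder else rng.integers(0, 2, n)
        return y, g, c

    # Task 1: confounder c1 is active; Task 2: c1 is inactive (noise).
    y1, g1, c1_t1 = make_task(active_confounder=True)
    y2, g2, c1_t2 = make_task(active_confounder=False)

    def agreement(feat, y):
        """Fraction of samples where a feature matches the label."""
        return float(np.mean(feat == y))

    # Within task 1 alone, g and c1 are indistinguishable shortcuts:
    print(agreement(g1, y1), agreement(c1_t1, y1))   # both 1.0

    # Jointly over both tasks, only the ground truth survives:
    y_all = np.concatenate([y1, y2])
    g_all = np.concatenate([g1, g2])
    c_all = np.concatenate([c1_t1, c1_t2])
    print(agreement(g_all, y_all))   # 1.0
    print(agreement(c_all, y_all))   # ~0.75: unreliable as a sole predictor
    ```

    This is why joint training can ignore the confounders while sequential training struggles: during task 1 there is no signal in the data itself that distinguishes the shortcut from the truth.
    
    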

    For more information, please read the full paper. The abstract is provided below:

    A dataset is confounded if it is most easily solved via a spurious correlation which fails to generalize to new data. In this work, we show that, in a continual learning setting where confounders may vary in time across tasks, the challenge of mitigating the effect of confounders far exceeds the standard forgetting problem normally considered. In particular, we provide a formal description of such continual confounders and identify that, in general, spurious correlations are easily ignored when training for all tasks jointly, but it is harder to avoid confounding when they are considered sequentially. These descriptions serve as a basis for constructing a novel CLEVR-based continually confounded dataset, which we term the ConCon dataset. Our evaluations demonstrate that standard continual learning methods fail to ignore the dataset’s confounders. Overall, our work highlights the challenges of confounding factors, particularly in continual learning settings, and demonstrates the need for developing continual learning methods to robustly tackle these.