Our work “BOWL: A Deceptively Simple Open World Learner” has just been accepted at the Fourth Conference on Lifelong Learning Agents (CoLLAs 2025). Congratulations to our PhD student Roshni Kamath, not only for this acceptance, but also for having two papers accepted back to back, following the recent success of our ConCon paper at ICML!
As implied by the title, we introduce a very strong baseline for Open World Learning, a baseline that has so far been missing in the field. Open World Learning is a challenging undertaking in Machine Learning that aims to simultaneously i) detect whether data points belong to known or unknown tasks, ii) determine whether novel data points relevant to the task at hand are actually informative, and iii) continually learn from new informative data points. Each objective is challenging on its own, and the sub-fields of “out-of-distribution detection”, “active learning”, and “continual learning” each address one of these aspects in isolation. A standard machine learning model typically suffers from catastrophic failure modes across all of these dimensions, making regular machine learning impractical for real-world application.

While the literature contains complex mechanisms to enable open world learning, in BOWL we show that any deep neural network containing batch-normalization layers can directly be turned into a very effective open world learner! To this end, we leverage the statistics tracked by batch normalization and formulate rigorous mathematical mechanisms, based on the learned Gaussian distributions, that address the individual functions a model requires to interact with an open world. Overall, BOWL is thus a lightweight, monolithic approach that makes a deep neural network significantly more robust, equips it with effective memory management, and enables it to adapt rapidly.
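To give a feel for how the statistics tracked by batch normalization can be put to work, here is a minimal, hypothetical sketch in PyTorch. It scores a batch of inputs by how far their per-channel activation means deviate from each BatchNorm layer's running mean and variance, a simple Mahalanobis-style criterion. The function name `bn_gaussian_scores`, the use of forward hooks, and the averaging over layers are our own illustrative choices, not the exact formulation from the paper.

```python
import torch
import torch.nn as nn


def bn_gaussian_scores(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Per-sample deviation of inputs from the Gaussian statistics tracked by the
    model's BatchNorm layers (higher = less consistent with the training data).

    Illustrative sketch only; BOWL's actual criteria are defined in the paper.
    """
    layer_scores = []
    hooks = []

    def make_hook(bn: nn.BatchNorm2d):
        def hook(module, inputs, output):
            act = inputs[0]                                  # (N, C, H, W) pre-normalization activations
            mu = act.mean(dim=(2, 3))                        # per-sample, per-channel mean: (N, C)
            var = bn.running_var + bn.eps                    # tracked per-channel variance: (C,)
            d = ((mu - bn.running_mean) ** 2 / var).mean(dim=1)  # squared, variance-scaled distance: (N,)
            layer_scores.append(d)
        return hook

    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            hooks.append(m.register_forward_hook(make_hook(m)))

    model.eval()
    with torch.no_grad():
        model(x)
    for h in hooks:
        h.remove()

    return torch.stack(layer_scores).mean(dim=0)             # average over BN layers: (N,)
```

Thresholding such a score already yields a simple novelty check for any batch-normalized network (e.g., a torchvision ResNet-18); the criteria, memory management, and update rules actually used in BOWL are detailed in the paper.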
For more information, read the full paper. The abstract is provided below:
Traditional machine learning excels on static benchmarks, but the real world is dynamic and seldom as carefully curated as test sets. Practical applications may generally encounter undesired inputs, are required to deal with novel information, and need to ensure operation through their full lifetime – aspects where standard deep models struggle. These three elements may have been researched individually, but their practical conjunction, i.e., open world learning, is much less consolidated. In this paper, we posit that neural networks already contain a powerful catalyst to turn them into open world learners: the batch normalization layer. Leveraging its tracked statistics, we derive effective strategies to detect in- and out-of-distribution samples, select informative data points, and update the model continuously. This, in turn, allows us to demonstrate that existing batch-normalized models can be made more robust, less prone to forgetting over time, and be trained efficiently with less data.
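As a final, equally hedged illustration of the "update the model continuously" ingredient mentioned in the abstract, the snippet below simply re-estimates the BatchNorm running statistics on a batch of new data while leaving all weights untouched. This is plain BN adaptation, shown only to make the idea of reusing batch-norm statistics concrete; it is not BOWL's update rule, which additionally handles memory management and the selection of informative samples.

```python
import torch
import torch.nn as nn


def adapt_batchnorm_stats(model: nn.Module, new_data: torch.Tensor, momentum: float = 0.1) -> None:
    """Refresh only the BatchNorm running statistics on new data (no weight updates).

    Illustrative sketch, not the method from the paper.
    """
    model.train()                      # BN layers update running stats in train mode
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.momentum = momentum      # how strongly the new batch shifts the tracked stats
    with torch.no_grad():              # forward pass only: no gradient step is taken
        model(new_data)
    model.eval()
```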