Detecting and Adapting to Real Concept Drift in Multi-Class DNN-based NIDS
Lightning Talk
Concept Drift; Real Concept Drift; Concept Drift Detection and Adaptation
Network Intrusion Detection Systems (NIDS) that rely on Deep Neural Networks (DNNs) have shown exceptional results in closed-world settings, with accuracy surpassing 98%. In open-world scenarios, however, these systems struggle, particularly at detecting and rejecting unfamiliar classes, a phenomenon referred to as real concept drift. Researchers are currently exploring the use of the model's confidence score to discard uncertain predictions assumed to be instances of real concept drift. However, supervised DNN models tend to be overconfident on out-of-distribution data, which calls the reliability of their confidence scores into question.
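To make the approach in question concrete, the minimal sketch below shows softmax-confidence rejection over a stand-in classifier. The architecture, feature dimension, class count, and threshold are all illustrative assumptions, not the actual setup discussed in the talk.

```python
# Minimal sketch of confidence-score rejection, the approach this talk
# questions. Model, feature dimension, and threshold are hypothetical.
import torch
import torch.nn as nn

NUM_CLASSES = 5          # e.g., benign + 4 known attack families (hypothetical)
REJECT_THRESHOLD = 0.9   # confidence cutoff tau (hypothetical)

model = nn.Sequential(   # stand-in for a trained DNN-based NIDS classifier
    nn.Linear(40, 64), nn.ReLU(), nn.Linear(64, NUM_CLASSES)
)

def classify_or_reject(flow_features: torch.Tensor):
    """Return (class, confidence), or ('drift', confidence) if below tau."""
    with torch.no_grad():
        probs = torch.softmax(model(flow_features), dim=-1)
    conf, pred = probs.max(dim=-1)
    if conf.item() < REJECT_THRESHOLD:
        return "drift", conf.item()   # treated as a real-concept-drift flow
    return pred.item(), conf.item()

# An out-of-distribution flow can still receive a high softmax score,
# which is why this rejection rule is unreliable for supervised DNNs.
print(classify_or_reject(torch.randn(40)))
```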
In this talk, I will show that the confidence scores of supervised DNN-based NIDSs cannot be relied upon to identify instances of real concept drift. I will then explore the potential of fitness functions, such as supervised contrastive learning, for detecting drifted network flows. Finally, I will present our novel framework, which employs a distance function to efficiently detect and adapt to real drift.
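As a rough illustration of a distance-based detector, the sketch below flags a flow as drifted when its embedding (e.g., from a supervised-contrastive encoder) lies far from every known-class centroid. The centroid scoring, threshold, and toy data are my own assumptions; the abstract does not specify the framework's actual distance function.

```python
# Minimal sketch of distance-based drift detection in an embedding space.
# Centroid scoring and the threshold tau are illustrative assumptions,
# not necessarily the framework presented in the talk.
import numpy as np

def class_centroids(embeddings: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Mean embedding per known class; rows indexed by class id."""
    return np.stack([embeddings[labels == c].mean(axis=0)
                     for c in np.unique(labels)])

def detect_drift(z: np.ndarray, centroids: np.ndarray, tau: float):
    """Flag a flow embedding z as drifted if it is far from every centroid."""
    dists = np.linalg.norm(centroids - z, axis=1)
    nearest = int(dists.argmin())
    if dists[nearest] > tau:
        return "drift", dists[nearest]   # candidate for adaptation
    return nearest, dists[nearest]

# Toy usage: 2-D embeddings for two known classes, plus one far-away flow.
rng = np.random.default_rng(0)
Z = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(3, 0.1, (50, 2))])
y = np.repeat([0, 1], 50)
C = class_centroids(Z, y)
print(detect_drift(np.array([10.0, 10.0]), C, tau=1.0))  # -> ('drift', ...)
```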