Towards practical deployments of ML for Systems Security
Keynote
In the last decade, real-world deployments of ML for systems security tasks have skyrocketed; yet, despite thousands of papers demonstrating the vulnerability of ML models to various security violations, practitioners still view this research domain with skepticism. Moreover, when ML models are deployed in a security context, they are hard to interpret and generate many false positives, leading practitioners to doubt their potential. In this talk, I present an overview of my team's efforts towards simulating realistic deployments that are closer to real-world threat scenarios. First, I will discuss and motivate four recommendations for bridging the gap between academia and industry, especially with respect to adversarial behaviors. Motivated by these findings, I then present research on evading detection in realistic, fully-blind scenarios, and on using reinforcement learning to perform risk assessments of (ML) systems.