A threat-specific look at Privacy-Preserving Machine Learning
Keynote
Training machine learning models whilst ensuring the privacy of their training data is within reach, but it requires great care. To succeed, one needs to carefully analyse how and where the model will be deployed, and decide which threats are worrisome for the particular application (threat modelling). Luckily, more than 20 years of research in the area can help a lot in this endeavour. This talk gives an introduction to privacy-preserving machine learning (PPML). We will look at the basic threats against the private training data of a machine learning model, at the defence mechanisms researchers have devised to counter them, and at the research opportunities that remain for the future. We will then briefly discuss recent techniques for evaluating ML models' security against specific privacy attacks.