Dr Giulio Zizzo

Research Staff Member at IBM Research

Giulio is a Research Staff Member at IBM Research, where he is part of the Security and Privacy team researching secure and robust machine learning. He obtained his PhD from Imperial College London, focusing on adversarial machine learning. During his PhD he worked with FeedForward AI, a startup developing AI solutions for the music industry. Prior to this, he worked at BAE Systems and interned at the Kamioka Observatory, a Nobel Prize-winning institution.



Certified Federated Adversarial Training

Keynote Talk

Federated Learning (FL) is a recent paradigm for training neural networks in a decentralised manner without users needing to share their data. Instead, users share only a locally trained model with a central server, which uses it to update a globally shared model. This offers privacy advantages for the users, as their data remains local, and significant compute savings for the central server, as it does not need to perform the model training itself.
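
To make the protocol concrete, here is a minimal sketch of a federated averaging round as described above. The function names and the use of NumPy are illustrative assumptions for this page, not the speaker's implementation.

```python
import numpy as np

def local_update(global_weights, local_data, train_fn):
    """Each user trains the shared model on their own data and returns
    only the resulting weights -- the raw data never leaves the device."""
    return train_fn(global_weights, local_data)

def federated_average(client_weights):
    """The central server updates the global model by averaging the
    locally trained models, without ever seeing the users' data."""
    return [np.mean(layer, axis=0) for layer in zip(*client_weights)]

# Hypothetical round: each client's weights are a list of NumPy arrays.
# global_weights = federated_average(
#     [local_update(global_weights, d, train_fn) for d in client_datasets])
```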

I will begin with an overview of the current security challenges in FL. Although FL tantalises us with the vision of private and decentralised training, in practice it suffers from several problems. FL is not as private as we had first hoped: recent works have shown that, in certain cases, a user's data can be exactly reconstructed from the model updates they share. Even abandoning the idea of privacy, and retaining just the compute advantages, FL leaves us with a system that is highly exposed to attacks. In fact, the classical FL setup is completely vulnerable to just a single malicious user.

To meet these challenges, there has been research on constructing robust FL training algorithms that block the influence of potentially many malicious users collaborating to subvert an ML model. However, these robust algorithms fundamentally require the number of malicious users to be constrained relative to the number of benign users. All bets are off if this is not the case; in that situation the attackers can arbitrarily control the resulting model. Benign user numbers can vary, as users can join an FL training round at will or participate based on factors such as idle system status or power and WiFi connectivity. Attackers, on the other hand, can concentrate their numbers and efforts at key points to gain control of the model.
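
As one illustration of why such guarantees depend on the attacker fraction (this is a generic robust aggregator, not the specific algorithms discussed in the talk), a coordinate-wise median resists outlying updates only while malicious clients remain a minority. The sketch below uses made-up numbers.

```python
import numpy as np

def coordinate_wise_median(updates):
    """Robust aggregation: take the per-parameter median across client updates.
    The median ignores outliers only while malicious clients are a minority."""
    return np.median(np.stack(updates), axis=0)

# Toy example: three benign updates near 1.0, attackers push towards 100.0.
benign = [np.array([1.0, 1.1]), np.array([0.9, 1.0]), np.array([1.0, 0.9])]
malicious = [np.array([100.0, 100.0])] * 2

print(coordinate_wise_median(benign + malicious))      # ~[1.0, 1.0]: attack blocked
print(coordinate_wise_median(benign + malicious * 2))  # [100.0, 100.0]: a malicious majority wins
```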

I will then present some of my recent research aiming for security guarantees in this scenario. To be able to defend in such a hostile environment, we narrow our focus to achieving secure adversarial training in a federated fashion. We turn to methods for certifying neural networks arising out of abstract interpretation, which give us the tools needed to analyse a user's model updates in a principled manner. With this, we can detect stealthy attacks and block corrupted model updates. This defence can preserve adversarial robustness even against arbitrary numbers of adaptive attackers with perfect knowledge.
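
The certification primitive that abstract interpretation provides can be illustrated with interval bound propagation, one of its simplest abstract domains: given a perturbation budget, it yields sound bounds on a model's outputs, the kind of principled quantity one can compute when analysing a submitted model. The sketch below is a generic illustration with made-up weights, not the defence presented in the talk.

```python
import numpy as np

def interval_linear(lower, upper, W, b):
    """Propagate an input interval [lower, upper] through a linear layer.
    Positive weights move with the upper bound, negative weights with the lower."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    new_lower = W_pos @ lower + W_neg @ upper + b
    new_upper = W_pos @ upper + W_neg @ lower + b
    return new_lower, new_upper

def interval_relu(lower, upper):
    """ReLU is monotone, so the interval maps through directly."""
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

# Certified bounds on a tiny network's outputs for all inputs within an
# L-infinity ball of radius eps around x (weights are illustrative only).
x, eps = np.array([0.5, -0.2]), 0.1
W1, b1 = np.array([[1.0, -2.0], [0.5, 1.5]]), np.zeros(2)
l, u = interval_linear(x - eps, x + eps, W1, b1)
l, u = interval_relu(l, u)
print(l, u)  # every perturbed input is guaranteed to yield outputs inside [l, u]
```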

Finally, to give a broader outlook on FL and machine learning security, I will conclude with a high-level overview of the wider research undertaken by the ML Security and Privacy team at IBM Research. This includes stealthily embedding entire datasets in GANs, homomorphic encryption in FL, machine unlearning, and the development of several open-source toolkits.