Details
Location: UCL Roberts Building.
Date: Friday, November 7, 2025.
Time: 1–5 PM (13:00–17:00)
Lightning talks, distinguished speakers, and networking.
We invite all students, researchers, professors, and invited guests to attend!
For inquiries, please contact: a.aldaihan23@imperial.ac.uk
Schedule
- 13:00 – Opening Remarks
- 13:10 – IBM Event Sponsor Talk | Dr. Giulio Zizzo
- 13:30 – Is AI Security doomed? | Dr. Ilia Shumailov
- 14:00 – Test AI Systems, Not Models | Dr. Peter Garraghan
- 14:30 – Lightning Talks | Georgi Ganev, Nataša Krčo, Igor Shilov, Xiaoxue Yang
- 15:00 – Break
- 15:30 – Lightning Talks | Dr. Ilias Tsingenopoulos, Adam Jones, Dan Ristea, Sanyam Vyas
- 16:00 – TBA | Dr. Vasilios Mavroudis
- 16:30 – Vulnerability Response in the Era of AI | Dr. Andrew Paverd
- 17:00 – Closing Remarks
Distinguished Speakers
Dr. Andrew Paverd
Principal Research Manager, Microsoft Security Response Center (MSRC).
Bio: Andrew Paverd is a Principal Research Manager in the Microsoft Security Response Center (MSRC), based in Cambridge. He and his team analyze all reported security and privacy vulnerabilities in Microsoft's AI systems and work to develop new mitigations. His research interests also include web and systems security. He received his DPhil from the University of Oxford in 2016.
Talk Title: Vulnerability Response in the Era of AI
Abstract: Coordinated vulnerability disclosure (CVD) is an industry-standard practice to ensure that vulnerabilities can be disclosed, remediated, and published in a coordinated manner. Many companies have well-established vulnerability response programs to support and even incentivize CVD. Drawing on examples from the Microsoft Security Response Center (MSRC), this talk explores how these programs have changed to both use and protect generative AI systems.
Dr. Vasilios Mavroudis
Principal Research Scientist & Co-Lead, AI for Cyber Defence (AICD), The Alan Turing Institute.
Bio: Dr. Vasilios Mavroudis is a Principal Research Scientist co-leading the AI for Cyber Defence (AICD) Research Centre at the Alan Turing Institute in London, UK. His work operates at the intersection of systems security and machine learning, with an emphasis on autonomous cyber defence, AI cyber risk evaluation, and model-based threat simulation. He leads national-scale efforts including the AI Cyber Risk Benchmark and the International AI Safety Report (2024–25), to which he contributed the section on offensive AI capabilities. His work spans the academic, policy, and applied security communities.
Dr. Ilia Shumailov
Founder & CEO, Sequrity.ai; Former Senior Research Scientist, Google DeepMind.
Bio: Dr. Ilia Shumailov holds a PhD in Computer Science from the University of Cambridge. Until recently, he was a Senior Research Scientist at Google DeepMind, focusing on the intersection of machine learning, privacy, and computer security. He now runs Sequrity.ai, a company building tools to secure the AI agents of the future.
Talk Title: Is AI Security doomed?
Dr. Peter Garraghan
CEO & Founder, Mindgard; Professor of Distributed Systems, Lancaster University.
Bio: Dr. Peter Garraghan is Founder and Chief Science Officer of Mindgard, Professor in Computer Science at Lancaster University, and a Fellow of the UK Engineering and Physical Sciences Research Council (EPSRC). An internationally recognised expert in AI security, Peter has dedicated years of scientific and engineering expertise to creating bleeding-edge technology to understand and overcome growing threats against AI. He has raised over £12 million in research funding, been featured in The Register and Forbes, and published over 70 scientific papers.
Talk Title: Test AI Systems, Not Models
Abstract: LLMs leveraged within GenAI applications and agentic systems introduce a significant security problem: they are opaque, non-intuitive, and probabilistic, which makes it particularly difficult to understand, model, or assess their security properties. While AI-driven applications unlock new possibilities, their deployment also introduces new vulnerabilities and amplifies existing threats. Much of the attention in AI security has been on testing models in isolation, but this approach overlooks the bigger picture: real-world threats do not emerge from models alone; they manifest when AI is integrated into applications and broader systems. This talk will cover key challenges within AI security, what's new (and what isn't), showcase real vulnerabilities and exploits, and discuss how academic researchers can help push the field forward.
Sponsors
We gratefully acknowledge support from our sponsors:
Organisers
Abdullah Aldaihan
Abdullah is a PhD student in the Security & Machine Learning (SML) Lab at Imperial under the supervision of Dr. Sergio Maffeis. He received his MSc in Computer Science from the Georgia Institute of Technology and his BSc in Computer Science from King Saud University. Abdullah's focus is on utilizing Large Language Models (LLMs) for systems security.
Adam Jones
Adam Jones is a PhD student at Imperial under the supervision of Dr. Sergio Maffeis and Dr. Giulio Zizzo. He received his MEng from Imperial in Computer Science. Adam's research is focused on the security of large language models for code generation.
Shae McFadden
Shae McFadden is a Ph.D. candidate at King’s College London, supervised by Dr. Fabio Pierazzi and Dr. Nicola Paoletti, investigating applications of deep reinforcement learning to cybersecurity. A passion for the intersection of artificial intelligence and cybersecurity has led to collaborations with the Systems Security Research Lab (S2Lab) at UCL and the AI for Cyber Defence Research Centre (AICD) at The Alan Turing Institute. Shae graduated from King’s College London with a first-class B.Sc. (Hons) in Computer Science specialising in Artificial Intelligence, and during undergraduate studies was awarded the Layton Research Award, the Alan Fairbourn Memorial Prize, and the Associateship of King’s College.
Xin Fan Guo
Xin Fan is a PhD student in the Safe and Trusted Artificial Intelligence Centre for Doctoral Training (STAI CDT) at Imperial under the supervision of Dr. Sergio Maffeis and Dr. Fabio Pierazzi (UCL). She received her BSc in Computer Science from King’s College London. Xin Fan’s research focuses on safe and trusted AI, with an emphasis on applying model-driven approaches to security, particularly network intrusion detection.
