Join our next lecture in our series on AI and LLMs with Sahar Abdelnabi from the Microsoft Security Response Center!
June 6, 2024, 4:30 - 5:15 pm
Large Language Models (LLMs) are integrated into many widely used, real-world applications and use-case scenarios. With their capabilities and increasingly agentic adoption, they open new frontiers for assisting with a wide range of tasks. However, they also introduce new security and safety risks. Unlike previous models with static generation, LLMs are dynamic, multi-turn, and flexible in their functionality, which makes them notoriously hard to evaluate and control robustly. This talk will cover some of the new potential risks posed by LLMs, how to evaluate them, and the challenges of mitigating them.
Sahar Abdelnabi is an AI security researcher at the Microsoft Security Response Center (Cambridge, UK). Previously, she was a PhD candidate at the CISPA Helmholtz Center for Information Security, advised by Prof. Dr. Mario Fritz, and she obtained her MSc degree at Saarland University. Her research interests lie in the broad intersection of machine learning with security, safety, and sociopolitical aspects. This includes the following areas: 1) understanding and mitigating the failure modes of machine learning models, their biases, and their misuse scenarios; 2) how machine learning models could amplify or help counter existing societal and safety problems (e.g., misinformation, biases, stereotypes, and cybersecurity risks); and 3) emergent challenges posed by new foundation models and large language models.