Join our next lecture from our series on AI and LLMs with Lea Schönherr from CISPA, the Helmholtz Center for Information Security!
The recording of the lecture is available below.
July 25, 2024, 11:00 am - 12:00 pm
Generative AI (genAI) is becoming more integrated into our daily lives, raising questions about potential threats within genAI systems and their outputs. In this talk, we will take a closer look at the resulting challenges and security threats associated with generative AI. These fall into two categories: malicious inputs injected into generative models, and the misuse of machine-generated output.
In the first case, specially crafted inputs are used to exploit models such as LLMs, disrupting their alignment or extracting sensitive information. Existing attacks show that the content filters of LLMs can be easily bypassed with specific inputs and that private information can be leaked. Moreover, established methods from the field of adversarial machine learning cannot be easily transferred to generative models. We have shown that prompt obfuscation can serve as an alternative for protecting intellectual property, and we demonstrate that similar utility can be achieved with only modest overhead while keeping confidential data protected.
In the second threat scenario, generative models are used to produce fake content that is nearly impossible to distinguish from human-generated content. Such content is often created for fraudulent and manipulative purposes; impersonation and realistic fake news are already achievable with a variety of techniques. As these models continue to evolve, detecting such fraudulent activities will become increasingly difficult, while the attacks themselves become easier to automate and require less expertise. This talk will provide an overview of the current challenges we face in detecting fake media in human and machine interactions.
Dr. Lea Schönherr is a tenure-track faculty member at the CISPA Helmholtz Center for Information Security, where her research in information security focuses on adversarial machine learning. She received her Ph.D. in 2021 from Ruhr University Bochum (RUB), Germany, where she was advised by Prof. Dr.-Ing. Dorothea Kolossa in the Cognitive Signal Processing group. She received two scholarships, from UbiCrypt (DFG Research Training Group) and CASA (DFG Cluster of Excellence).