October 15, 2025, 11:00 am - 12:00 pm
While artificial intelligence has long played a role in research, it has now entered the daily lives of many researchers with the introduction of generative AI applications such as ChatGPT, Gemini, and others. According to international studies, between 31% and 76% of researchers now work with generative AI, while student use is already at 90%. Additionally, a survey by Wiley found that nearly two-thirds of respondents felt that a lack of guidelines prevents them from using generative AI to the extent they would like. Since 2023, research institutions and other stakeholders, such as research funders and publishers, have therefore been discussing how the use of generative AI in research should be handled from the perspective of research integrity. While consensus on some matters was reached quickly, other issues are still being debated, particularly because of the ongoing technical advances in the field of AI. As a result, an increasingly heterogeneous policy landscape is emerging, one that still has blind spots. In this lecture, I will examine existing recommendations and persistent challenges at the intersection of generative AI and research integrity. I will address how to disclose different AI use cases in publications and discuss what currently counts as good research practice concerning AI in peer review, grant writing, and the use of AI-generated images. In addition, the lecture will raise awareness of issues still under debate, such as recommendations on which tools to use, questions of access, and ethical aspects.
Dr. Katrin Frisch has worked as a researcher at the Ombuds Committee for Research Integrity in Germany since May 2020, specifically on the project "Discussion Hubs to Foster Research Integrity", with a focus on research data and, since 2023, artificial intelligence. Between March 2023 and March 2024, she also worked as a research integrity advisor at the OWID office. The aim of the Discussion Hubs project is to create practical guidelines supplementing the DFG Code of Conduct "Guidelines for Safeguarding Good Research Practice" as well as to produce novel research on research integrity. Recent publications include the FAQ Artificial Intelligence and Research Integrity, a study on authorship and data-use conflicts, and a monograph on fairness in science (the latter two available only in German). You can find an overview of the project and related output here: https://ombudsgremium.de/9806/research-data-and-ai/?lang=en