🚀 Elevate your research and innovation by joining our Retrieval-Augmented Generation (RAG) course, designed specifically for scientists. This course empowers you to harness cutting-edge AI techniques that seamlessly integrate information retrieval with advanced language generation.
💡 Learn how to build intelligent systems that access and synthesize vast knowledge bases, enabling faster insights, deeper analysis, and more informed decision-making. Whether you're advancing your own research or building tools for the scientific community, RAG offers a powerful leap forward. Transform the way you work with information, and let science meet the future of AI.
Details
📅 When? May 19-20, 2025; 9am to 5pm each day
📍 Where? Helmholtz-Zentrum Dresden-Rossendorf, Building 270, ground-floor seminar room (102)
🎯 Who? Scientists, ML enthusiasts, and software developers
🔬 What? Bring a laptop, some Python experience, and curiosity
Speaker
Dr. Oliver Guhr is a distinguished scientist and software engineer specializing in artificial intelligence, with a particular focus on voice interfaces and large language models (LLMs). He earned his Ph.D. in AI from the Technical University of Dresden, where his research centered on developing voice interfaces for socially assistive robots in healthcare settings.
As the co-founder of Impact Labs GmbH, Dr. Guhr is at the forefront of creating next-generation voice interaction technologies. His contributions to the AI community have been recognized with the Microsoft Most Valuable Professional (MVP) award for Azure AI Services in 2024.
Dr. Guhr has developed several influential models, including the "FullStop" multilingual punctuation prediction model and the "German Sentiment BERT" model, both of which are widely utilized in natural language processing applications. His active presence on platforms like Hugging Face and GitHub underscores his commitment to open-source development and knowledge sharing.
Course Content
Day 1:
- How do LLMs work and how are they trained?
- How do I write a prompt? (with exercises)
- How do I integrate LLMs into my projects using the OpenAI API (with Blablador)? (see the short sketch after this list)
- Exercise: What is streaming / function calling / structured outputs?
- What are embedding models?
- How does neural search work?
- How does RAG work?
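To give a flavour of the API exercises, here is a minimal, illustrative sketch of calling an OpenAI-compatible endpoint such as Blablador from Python. The base URL, model name, and API key below are placeholders, not the actual course configuration, which will be provided during the workshop:

```python
# Minimal sketch: chatting with an OpenAI-compatible endpoint (e.g. Blablador).
# The base_url, api_key, and model are placeholders -- use the values handed
# out in the course.
from openai import OpenAI

client = OpenAI(
    base_url="https://example-blablador-endpoint/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",                            # placeholder key
)

response = client.chat.completions.create(
    model="example-model",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a helpful research assistant."},
        {"role": "user", "content": "Summarize retrieval-augmented generation in two sentences."},
    ],
)

print(response.choices[0].message.content)
```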
Day 2:
- RAG exercises – Part 2 (a minimal RAG sketch follows this list)
- How do I evaluate/test LLM-based software?
- Mini-hackathon: Participants bring tasks and data from their own projects.
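As a taste of the RAG exercises, here is a minimal, illustrative sketch of the core idea: embed documents, retrieve the most similar passage for a question, and place it into the prompt. It assumes the sentence-transformers package and an example model name; it is not the course's own code:

```python
# Minimal RAG sketch: embed, retrieve by cosine similarity, augment the prompt.
# Assumes the sentence-transformers package; the model name is just an example.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # example embedding model

documents = [
    "HZDR is a research centre in Dresden-Rossendorf.",
    "Retrieval-augmented generation combines search with text generation.",
    "Embedding models map text to vectors for semantic comparison.",
]
doc_vectors = model.encode(documents, normalize_embeddings=True)

question = "What does RAG do?"
query_vector = model.encode([question], normalize_embeddings=True)[0]

# With normalized vectors, cosine similarity is just a dot product.
scores = doc_vectors @ query_vector
best_doc = documents[int(np.argmax(scores))]

# The retrieved passage becomes context for the LLM prompt.
prompt = f"Answer using only this context:\n{best_doc}\n\nQuestion: {question}"
print(prompt)
```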