Speakers
Christina Bukas (Helmholtz AI)
Donatella Cea (Helmholtz AI)
Elisabeth Georgii (Helmholtz AI)
Erinc Merdivan (Helmholtz AI)
Harshavardhan Subramanian (Helmholtz AI)
Helena Pelin (Helmholtz AI)
Helene Hoffmann (Helmholtz AI)
Isra Mekki (Helmholtz AI)
Lisa Barros de Andrade e Sousa (Helmholtz AI)
Mahyar Valizadeh (Helmholtz AI)
Marie Piraud (H.AI / HMGU)
Rao Muhammad Umer (Helmholtz Munich)
Sebastian Starke (Helmholtz AI)
Description
This course introduces participants to Explainable AI (XAI). Its goal is to help participants understand how XAI methods can uncover biases in the data and provide useful insights into model behaviour. After a general introduction to XAI, the course goes deeper into state-of-the-art model-agnostic interpretation techniques, followed by a practical session covering these techniques. Finally, we focus on two model-specific post-hoc interpretation methods, with hands-on training on interpreting random forests and neural networks on imaging data, to learn about the strengths and weaknesses of these standard methods in the field.
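To give a flavour of what a model-agnostic interpretation technique looks like, below is a minimal, self-contained sketch of permutation feature importance. It is illustrative only and not taken from the course materials: the toy data, the linear "model", and all variable names are assumptions, and real courses would typically use library implementations such as `sklearn.inspection.permutation_importance`.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (illustrative assumption): y depends strongly on feature 0,
# weakly on feature 1, and not at all on feature 2.
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

# Stand-in "model": a least-squares linear fit. Permutation importance
# is model-agnostic, so any trained predictor could be used here instead.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
predict = lambda M: M @ w

def r2(y_true, y_pred):
    # Coefficient of determination: 1 means perfect prediction.
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1 - ss_res / ss_tot

baseline = r2(y, predict(X))

# Permutation importance: shuffle one feature at a time and record the
# drop in R^2 -- a measure of how much the model relies on that feature.
importances = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importances.append(baseline - r2(y, predict(Xp)))

print([round(v, 3) for v in importances])
```

On this toy data, shuffling feature 0 destroys the model's score while shuffling feature 2 barely changes it, which is exactly the kind of insight into model reliance that the course's practical sessions explore with real datasets.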
→ Register here ←
| Target audience | Any |
|---|---|
| Learning target | Participants will gain an understanding of, and practical experience with, classic interpretability methods for machine learning and deep learning |
| Previous experience | Attended the courses Introduction to Machine Learning and Introduction to Deep Learning (or equivalent experience) |
| Maximum number of participants | 50 |