Nicolás Nieto1,2, Federico Raimondo1, Vera Komeyer1,2,3
1 Institute of Neuroscience and Medicine (INM-7: Brain and Behaviour), Research Centre Jülich, Jülich, Germany
2 Institute of Systems Neuroscience, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
3 Department of Biology, Faculty of Mathematics and Natural Sciences, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
When: 10:15 - 18:00
Where: ZEA-1 seminar room, building 3.1U, room 104 [coordinates]
This tutorial and hands-on session aim to provide participants with a comprehensive understanding of SHAP (SHapley Additive exPlanations) for interpreting machine learning models. The scope includes an explanation and exploration of SHAP principles, practical implementations, and considerations for model interpretation. Participants will gain proficiency in leveraging SHAP values to improve the explainability of machine learning models across various scenarios, including unbalanced data, collinear features, and the nuanced relationship between causality and correlation.
By the end of the course, participants will have a solid foundation in SHAP, enabling them to judge when the technique is appropriate and how to communicate the interpretability of machine learning models in their respective domains.
Timeline
10:15 - 12:00 Tutorial
Introduction to SHAP (15 mins)
- Overview of SHAP principles
- Importance of explainability in machine learning
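For orientation, SHAP attributions are grounded in the Shapley value from cooperative game theory: feature i's attribution is its average marginal contribution over all subsets S of the remaining features. A standard formulation (stated here for reference; the tutorial may present it differently) is:

```latex
\phi_i = \sum_{S \subseteq F \setminus \{i\}} \frac{|S|!\,(|F|-|S|-1)!}{|F|!}\,\Big( v\big(S \cup \{i\}\big) - v(S) \Big)
```

where F is the full feature set and v(S) is the model's expected output when only the features in S are known.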
Why Use SHAP? (15 mins)
- Identifying scenarios for applying SHAP
- Understanding the benefits and limitations of SHAP
- Differentiating SHAP from other XAI (explainable AI) tools
When to Use It? And When Not to Use It? (15 mins)
- Providing guidance on when in a project to use SHAP (and when not to)
- Highlighting scenarios (and stages of a project) where SHAP may not be suitable
How to Use SHAP (15 mins)
- Integrating SHAP into machine learning workflows
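A minimal sketch of what such an integration can look like (the dataset, model, and parameters below are illustrative placeholders, not the tutorial's materials):

```python
# Minimal sketch: fitting a model and computing SHAP values for it.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
# For classifiers there is one set of SHAP values per class
# (the exact output shape depends on the shap version).
shap_values = explainer.shap_values(X_test)
```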
Caveats: Collinear Features / Causality <-> Correlation / Unbalanced Data (20 mins)
- Exploring common pitfalls and best practices
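To make the collinearity pitfall concrete, a small synthetic sketch (all names and numbers are illustrative): two near-duplicate features end up sharing the credit for a single underlying signal.

```python
# Sketch of the collinearity caveat: near-identical features can split
# SHAP credit between them arbitrarily.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
x1 = rng.normal(size=1000)
x2 = x1 + rng.normal(scale=0.01, size=1000)  # near-duplicate of x1
y = 3 * x1 + rng.normal(scale=0.1, size=1000)  # only x1 drives the target

X = np.column_stack([x1, x2])
model = RandomForestRegressor(random_state=0).fit(X, y)

shap_values = shap.TreeExplainer(model).shap_values(X)
# Credit for the single underlying signal is shared across both columns,
# so neither feature alone reflects the true importance:
print(np.abs(shap_values).mean(axis=0))
```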
Model Interpretation and Reporting (30 mins)
- Translating SHAP outputs into meaningful model interpretations
- Guidelines for reporting SHAP results in research papers
- What we can (and cannot) say about our models with the help of SHAP
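A sketch of plots commonly used when reporting SHAP results, again with an illustrative dataset and model rather than the tutorial's own materials (each call renders a matplotlib figure):

```python
# Sketch of common SHAP reporting plots; dataset and model are placeholders.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

explanation = shap.Explainer(model, X)(X)  # Explanation object
shap.plots.beeswarm(explanation)           # global summary of feature impact
shap.plots.bar(explanation)                # mean |SHAP| per feature
shap.plots.waterfall(explanation[0])       # local explanation of one sample
```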
12:15 - 13:00 Lunch
13:15 - 18:00 Hands-on Workshop
Practical Implementation (2 hours)
- Participants work through hands-on exercises using SHAP
- Real-world examples to reinforce understanding
Group Discussions and Q&A (1 hour)
- Addressing participant queries and concerns
- Facilitating group discussions on SHAP application challenges
Project Work (1 hour)
- Participants apply SHAP to their datasets or models
- Guidance and feedback from instructors
Requirements
- A laptop with Anaconda or Miniconda installed
- Basic knowledge of Python programming and (supervised) machine learning
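As a quick sanity check before the session, participants can verify that the relevant packages import (the authoritative package list is on the GitHub page linked below; the names here are assumptions):

```python
# Quick environment check; see the GitHub page for the exact package list.
import shap
import sklearn
print("shap", shap.__version__, "| scikit-learn", sklearn.__version__)
```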
Latest details on GitHub [link]