Instructors: Anastasiia Iarkaeva
Date: 6 Nov 2024
Time: 9:00-13:00
Room: 4
Format: Workshop
After a brief introduction to the FAIR principles and the significance of automated assessments, participants will engage in a hands-on session in which they compare the outputs of three assessment tools (F-UJI, FAIR Enough, and FAIR Checker) on a curated list of datasets drawn from repositories that are typical of the biomedical context. Both a general overview of FAIR screening results at the repository level and results for individual datasets will be prepared ahead of the workshop. The workshop will show how the FAIR principles can be translated into executable tests, showcase the different methodologies used by each tool and how each interprets and scores metadata, and, more generally, discuss the broader application of FAIR assessments for monitoring purposes.
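To make "executable tests" concrete, here is a minimal, purely illustrative sketch of how one such test might look, taking FAIR principle F1 ("(meta)data are assigned a globally unique and persistent identifier") as an example. None of the names or patterns below come from F-UJI, FAIR Enough, or FAIR Checker; the tests in the real tools are considerably more elaborate.

```python
# Illustrative sketch (not taken from any of the three tools): turning FAIR
# principle F1 ("(meta)data are assigned a globally unique and persistent
# identifier") into an executable metadata test.
import re

# Simplified patterns for two common persistent-identifier schemes.
PID_PATTERNS = {
    "doi": re.compile(r"^10\.\d{4,9}/\S+$"),
    "handle": re.compile(r"^\d+(\.\d+)*/\S+$"),
}

def test_f1_persistent_identifier(metadata: dict) -> dict:
    """Check whether a metadata record carries a recognizable PID."""
    identifier = metadata.get("identifier", "")
    for scheme, pattern in PID_PATTERNS.items():
        if pattern.match(identifier):
            return {"principle": "F1", "passed": True, "scheme": scheme}
    return {"principle": "F1", "passed": False, "scheme": None}

print(test_f1_persistent_identifier({"identifier": "10.5281/zenodo.1234567"}))
# -> {'principle': 'F1', 'passed': True, 'scheme': 'doi'}
```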
This workshop provides participants with hands-on experience in evaluating automated tools designed for FAIR data assessment. It contributes to the conference by fostering a deeper understanding of how different tools translate the FAIR Principles into measurable tests, and of the capabilities and limitations of such automated assessments.
Agenda
- Introduction to FAIR Automated Assessment Tools (15 mins)
- Tool Presentations (20 mins)
  - Demonstration of the openly accessible dashboard (newly developed for the workshop) for use during the hands-on session.
- Hands-On Comparison Session (120 mins + Break)
  - Participants will be divided into groups to work with specific datasets.
  - Each group will use the outputs of the three tools to analyse its assigned datasets, along with each tool's technical documentation and/or code.
  - Groups will compare the results with a focus on how each tool discovers metadata; Jamboard will be used both during the group work and for the subsequent presentation of group results (see the comparison sketch after this agenda).
  - Each group will be invited to choose one FAIR Principle to analyse in depth.
- Group Discussions and Presentations (45 mins)
- Wrap-Up and Q&A (15 mins)
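To give a flavour of the comparison task, the following hypothetical sketch tabulates which FAIR metrics each tool passed for one dataset. It assumes the pre-generated reports have been normalized to a common JSON schema; in reality each tool has its own output format, and the file paths and field names below are illustrative only.

```python
# Hypothetical sketch of the group comparison task: given pre-generated JSON
# reports from the three tools (normalized to a shared schema for this
# illustration), tabulate which FAIR metrics each tool passed for one dataset.
import json

TOOLS = ["f-uji", "fair-enough", "fair-checker"]

def passed_metrics(report: dict) -> set:
    """Collect the identifiers of metrics the tool marked as passed."""
    return {
        r["metric_identifier"]
        for r in report.get("results", [])
        if r.get("test_status") == "pass"
    }

results = {}
for tool in TOOLS:
    # Illustrative paths: one pre-generated report per tool and dataset.
    with open(f"reports/{tool}/dataset-001.json") as fh:
        results[tool] = passed_metrics(json.load(fh))

# Metrics passed by every tool vs. metrics passed by only some: a starting
# point for discussing how each tool discovers and interprets metadata.
common = set.intersection(*results.values())
for tool, metrics in results.items():
    print(f"{tool}: {len(metrics)} passed, {len(metrics - common)} not passed by all")
```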
Goal
The goal of this workshop is to equip participants with the knowledge and practical experience to evaluate and compare the outputs of three leading automated FAIR assessment tools: F-UJI, FAIR Enough, and FAIR Checker. Participants will learn how to critically assess the performance of these tools across various datasets, with a focus on understanding their strengths and limitations within the biomedical context. No installation or direct use of the tools is required during the workshop; instead, participants will engage with pre-generated outputs to facilitate comparison and discussion.
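For context, outputs of this kind can be pre-generated programmatically rather than collected by hand. The minimal sketch below requests a single assessment from F-UJI's REST API; the endpoint path, demo credentials, payload fields, and response fields are assumptions based on F-UJI's public documentation and may differ per deployment (FAIR Enough and FAIR Checker expose their own, different APIs).

```python
# Minimal sketch: pre-generating one FAIR assessment via F-UJI's REST API.
# Endpoint, credentials, and field names are assumptions drawn from F-UJI's
# public documentation; verify them against the instance you actually use.
import requests

FUJI_ENDPOINT = "https://www.f-uji.net/fuji/api/v1/evaluate"  # assumed demo instance

payload = {
    # Hypothetical dataset identifier; any resolvable PID or landing-page URL.
    "object_identifier": "https://doi.org/10.5281/zenodo.1234567",
    "use_datacite": True,
}

# The public demo instance is documented to require HTTP Basic auth.
response = requests.post(FUJI_ENDPOINT, json=payload, auth=("marvel", "wonderwoman"))
response.raise_for_status()
report = response.json()

# Each entry in "results" is one metric test derived from a FAIR principle.
for result in report.get("results", []):
    print(result.get("metric_identifier"), "->", result.get("test_status"))
```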
In this workshop, we will collect experiences with the application of FAIR screening tools. Ideally, these will be generalizable insights that are helpful to future users of these tools; if so, we will publish a summary of the workshop outcomes. The insights gained from this collective expert analysis will contribute to improving the FAIRness monitoring of biomedical datasets.
Target group
Not specified. Everyone is welcome to join.
Prerequisites
A general understanding of metadata standards is required. While programming knowledge is not mandatory, familiarity with the underlying scripts of one or more of the tools would be beneficial for a deeper understanding.
Registration
Register here