Speaker
Description
Bioimaging is an important methodological approach widely applied in the life sciences. It unites the power of microscopy, biology, biophysics, and advanced computational methods, allowing scientists to study biological functions from the level of single molecules up to the whole organism. In parallel, high-content screening (HCS) bioimaging approaches are powerful techniques that automate the imaging and analysis of large numbers of biological samples to extract quantitative and qualitative information from the images. HCS bioimaging plays a crucial role in advancing our understanding of cellular processes, disease mechanisms, and drug development by enabling the rapid analysis of large-scale biological data.
However, HCS still presents several bottlenecks that prevent these approaches from exerting their full potential for scientific discovery. A major example is the huge amount of metadata generated in each experiment, capturing critical information about the images. Efficient and accurate treatment of image metadata is of great importance, as it provides insights essential for effective image management, search, organisation, interpretation, and sharing. Finding ways to properly handle this large amount of complex and unstructured data is vital for implementing the Findable, Accessible, Interoperable, and Reusable (FAIR) principles in bioimaging.
Within NFDI4BioImaging (the National Research Data Infrastructure consortium focusing on bioimaging in Germany), we want to find viable solutions for storing, processing, analysing, and sharing HCS data. In particular, we want to develop solutions that make metadata findable and machine-readable using (semi-)automatic analysis pipelines. In scientific research, such pipelines are crucial for maintaining data integrity, supporting reproducibility, and enabling interdisciplinary collaboration. These tools allow different users to retrieve images based on specific attributes and support quality control by verifying that appropriate metadata are present.
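The attribute-based retrieval described above can be sketched as a simple filter over key-value metadata records. This is a minimal illustration only: the record structure and the attribute names ("organism", "stage", "treatment") are hypothetical placeholders, not the actual NFDI4BioImaging schema.

```python
# Minimal sketch of attribute-based image retrieval over key-value metadata.
# Record structure and attribute names are hypothetical illustrations.

def find_images(records, **criteria):
    """Return identifiers of images whose metadata match all key-value criteria."""
    return [
        rec["id"]
        for rec in records
        if all(rec.get("metadata", {}).get(k) == v for k, v in criteria.items())
    ]

# Illustrative catalogue of annotated images (placeholder values)
catalogue = [
    {"id": "img_001", "metadata": {"organism": "Danio rerio", "stage": "96 hpf", "treatment": "control"}},
    {"id": "img_002", "metadata": {"organism": "Danio rerio", "stage": "96 hpf", "treatment": "exposed"}},
    {"id": "img_003", "metadata": {"organism": "Danio rerio", "stage": "120 hpf", "treatment": "control"}},
]

print(find_images(catalogue, stage="96 hpf"))      # img_001 and img_002
print(find_images(catalogue, treatment="control")) # img_001 and img_003
```

In a production pipeline, the same kind of query would run against the metadata store of the image server rather than an in-memory list.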
In the present study, we propose an automated analysis pipeline for storing, processing, analysing, and sharing HCS bioimaging data. The (semi-)automatic workflow was developed using, as a case study, a dataset of zebrafish larvae images previously obtained from an automated imaging system generating data in an HCS fashion. In our workflows, zebrafish images are automatically enriched with metadata (e.g. key-value pairs, tags, raw data, regions of interest) and uploaded to the UFZ OME Remote Objects (OMERO) server using Python scripts embedded in workflows developed with KNIME or Galaxy. The workflows allow users to intuitively fetch images from the local server and perform image analysis (e.g. annotation) or more complex toxicological analyses (e.g. dose-response modelling). Furthermore, we want to improve the FAIRness of the protocol by adding a direct upload link to the Image Data Resource (IDR) repository to automatically prepare the data for publication and sharing.
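The metadata-enrichment step of such a workflow can be sketched with omero-py, the standard OMERO Python bindings, by attaching key-value pairs to an image as a map annotation. This is a sketch under stated assumptions, not the exact script used in our workflows: the host name, credentials, and example zebrafish keys are placeholders, and the OMERO calls follow the documented `omero.gateway` API but are not executed here.

```python
# Sketch: attach key-value metadata to an OMERO image via omero-py.
# Host, credentials, and metadata keys below are illustrative placeholders.

def as_key_value_pairs(metadata):
    """Convert a metadata dict to the list-of-pairs form OMERO map annotations expect."""
    return [[str(k), str(v)] for k, v in metadata.items()]

def annotate_image(image_id, metadata, host="omero.example.org", user="user", password="secret"):
    # omero-py is only required at upload time, so it is imported here.
    from omero.gateway import BlitzGateway, MapAnnotationWrapper

    conn = BlitzGateway(user, password, host=host, port=4064)
    try:
        conn.connect()
        image = conn.getObject("Image", image_id)
        ann = MapAnnotationWrapper(conn)
        # Namespace used by OMERO clients for editable key-value pairs
        ann.setNs("openmicroscopy.org/omero/client/mapAnnotation")
        ann.setValue(as_key_value_pairs(metadata))
        ann.save()
        image.linkAnnotation(ann)  # link the key-value pairs to the image
    finally:
        conn.close()

# Illustrative zebrafish metadata (placeholder values)
example = {"organism": "Danio rerio", "exposure_h": 96, "compound": "control"}
print(as_key_value_pairs(example))
```

Embedded in a KNIME or Galaxy step, a script of this shape lets the workflow annotate each image immediately after acquisition, so the metadata is searchable on the server from the start.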
Keywords
bioimaging, NFDI, zebrafish, KNIME, Galaxy, OMERO, automatic workflows
Please assign your contribution to one of the following topics: Metadata annotation and management close to the research process
Please assign yourself (presenting author) to one of the stakeholders: Researchers