Helmholtz Imaging Annual Conference 2024

FRAUENBAD Heidelberg

Bergheimer Strasse 45 69115 Heidelberg
Christian Schroer (DESY), Klaus Maier-Hein (DKFZ), Martin Burger (DESY, FS-CI and Helmholtz Imaging)
Description

The Helmholtz Imaging Conference is back in 2024, and this time, we'll be meeting in Heidelberg on 14-15 May. The conference is open to all scientists across all Helmholtz research fields, and is tailored for scientists and researchers engaged in imaging research or utilizing imaging techniques. Join us for exciting keynotes, discussions, scientific talks, poster sessions, and more. It's a fantastic opportunity to expand your network, present your work, stay updated on imaging trends, and be part of shaping the future of Helmholtz Imaging. REGISTER NOW and stay tuned for more information to come! 

***LAST CHANCE - THERE ARE STILL PLACES AVAILABLE!!! Register now and secure your participation in the conference!***

We are once again putting together an exciting program for you. Take a look at what you can expect:

  • Keynotes by Zeynep Akata (Helmholtz AI, TUM & Helmholtz Munich), Joost Batenburg (Leiden University) and Guido Grosse (AWI)
  • Poster session with a Helmholtz Imaging projects area
  • BYOIC - Bring Your Own Imaging Challenge!
  • and much more

 

Our Keynote Speakers:

Zeynep Akata is Liesel Beckmann Distinguished Professor of Computer Science at the Technical University of Munich and director of the Institute for Explainable Machine Learning at Helmholtz Munich.
Joost Batenburg is a professor at the Leiden Institute of Advanced Computer Science (LIACS) of Leiden University, where he holds the chair of Imaging and Visualization. He is also affiliated with CWI, the national research institute for mathematics and computer science in the Netherlands, and is program director of the interdisciplinary research program Society, Artificial Intelligence and Life Sciences (SAILS).
Guido Grosse is a professor at the Alfred Wegener Institute (AWI) in Bremerhaven, where he heads the Permafrost Research section. He studies the state of Arctic permafrost in Siberia, Canada, Alaska and Svalbard, investigating how vulnerable permafrost is to warming, how quickly and where it thaws, and how its thawing affects landscapes, the water cycle and biogeochemical processes such as the carbon cycle.

 

→ BYOIC - Bring Your Own Imaging Challenge ←

You can't always get what you want? - Yes, you can!

What is currently a crucial bottleneck in your community? What can we do to help you make spectacular progress? 

We are interested in your everyday problems as well as in questions that you have been putting aside for years. Making others aware of your research problems and bringing together experts from different fields could be the missing spark to solve your problems.

At the conference, you will give a super-short pitch on your topic, followed by 45 minutes of discussion with the community. Let's see if someone comes up with THE idea for a solution.

Send your question and a short description to contact@helmholtz-imaging.de (subject: BYOIC) so that we can plan ahead, and let the community work on your problems.

  • Tuesday 14 May
    • 09:00 - 09:30
      Registration & Coffee
      Conveners: Katharina Kriegel (Helmholtz Imaging), Sabine Niebel (Helmholtz Imaging)
    • 09:30 - 10:00
      Welcome & Introduction
    • 10:00 - 10:45
      Thematic Session: Image Analysis - part I

      Three talks, 15 min. each

      Convener: Klaus Maier-Hein (DKFZ)
      • 10:00
        Quantification of human skin biomarkers for disease characterization by optoacoustic mesoscopy with machine learning 15m

        Non-invasive quantification of the anatomical features of human skin can lead to improved identification of vascular and other features associated with a number of diseases. Ultra-wideband raster-scan optoacoustic mesoscopy (RSOM) is a novel modality that has demonstrated unprecedented ability to visualize epidermal and dermal features in vivo. This ability can be used to prognose dermatological diseases and monitor treatment responses in a non-invasive manner based on quantified skin anatomical and microvasculature features. However, automatic and quantitative analysis of three-dimensional RSOM datasets remains a challenge. To address this challenge, we have developed a deep learning-based framework, termed Deep Learning RSOM Analysis Pipeline (DeepRAP), to analyze and quantify morphological skin features recorded by RSOM and extract imaging biomarkers for disease characterization. DeepRAP uses a two-layer segmentation strategy based on a convolutional neural network with a transfer learning approach. This strategy enabled automatic recognition of the skin layers according to their morphological structure and subsequent segmentation of the dermal microvasculature with an accuracy equivalent to human assessment. The combination of RSOM and DeepRAP analysis presents an attractive solution to image and quantify morphology and functional changes in the skin, with the potential to improve diagnostic and prognostic applications for skin and circulatory pathologies.

        Speaker: Hailong He (Helmholtz Munich)
      • 10:15
        Trustworthy Unsupervised ML Model for Drawing Coastlines and Creating Benchmark Dataset 15m

        Automatic drawing of coastlines from satellite imagery is a crucial factor in detecting coastline shifts due to global climatic changes. On the other hand, the unavailability of labelled information poses a challenge. We propose a trustworthy unsupervised AI solution to automatically draw coastlines in the Baltic Sea area to create a ‘pre-labelled’ dataset, which clearly delineates the boundary pixels between sea and land. We have applied state-of-the-art methods to find the optimal number of clusters to understand the unknown semantic classes in the dataset.

        Speaker: Chandrabali Karmakar (DLR)
      • 10:30
        Volumetric Optical Flow Network for Digital Volume Correlation of Synchrotron Radiation-based Micro-CT Images of Bone-Implant Interfaces 15m

        In materials science research, digital volume correlation (DVC) analysis is commonly used to track deformations and strains to elucidate morphology-function relationships. Optical flow-based DVC is particularly popular because of its robustness in estimating the correlation as a dense deformation vector field. Recently, computer vision researchers showed that network-based optical flow approaches can outperform classical iterative optical flow approaches. This finding has increased the interest in applying machine learning-based optical flow methods to DVC.
        In this work, we present a supervised machine learning approach for digital volume correlation. This approach extends the state-of-the-art network-based optical flow method, RAFT, from 2D images to 3D volumes such that it predicts the volumetric displacement vector from the input volume pairs. Experiments show that this volumetric network performs well in estimating different displacement fields when compared to cutting-edge iterative DVC methods for bone-implant materials based on high-resolution synchrotron-radiation micro-computed tomography imaging data.

        Speaker: Tak Wong (HEREON)
    • 10:45 - 11:15
      Coffee Break 30m
    • 11:15 - 12:00
      Keynote by Zeynep Akata: Interpretable Vision and Language Models

      Representation learning, foundation models, explainable AI
      30 Minutes Talk + 15 Minutes Discussion

      Convener: Klaus Maier-Hein (DKFZ)
      • 11:15
        Interpretable Vision and Language Models 45m

        Clearly explaining a rationale for a visual classification decision to an end-user can be as important as the decision itself. For the communication to be effective, the decision maker needs to recognize the class-discriminative properties of the object that are present in the image, but it also needs to understand the intent of the communication partner. In this talk, I will present my past and current work on Explainable Machine Learning, focusing on large vision and language models, where we show (1) how to learn compositional representations of images that go beyond recognition towards understanding, (2) how to generate visual features using natural language descriptions when no visual data is available to train deep models, and (3) how our models focus on the discriminating properties of the visible object, jointly predict a class label, and explain why or why not the predicted label is chosen for the image.

    • 12:00 - 12:30
      Thematic Session: Image Analysis - part II

      Two talks, 15 min. each

      Convener: Klaus Maier-Hein (DKFZ)
      • 12:00
        DinoBloom: A Foundation Model for Generalizable Cell Embeddings in Hematology 15m

        In hematology, computational models offer significant potential to improve diagnostic accuracy, streamline workflows, and reduce the tedious work of analyzing single cells in peripheral blood or bone marrow smears. However, clinical adoption of computational models has been hampered by the lack of generalization due to large batch effects, small dataset sizes, and poor performance in transfer learning from natural images. To address these challenges, we introduce DinoBloom, the first foundation model for single cell images in hematology, utilizing a tailored DINOv2 pipeline. Our model is built upon an extensive collection of 13 diverse, publicly available datasets of peripheral blood and bone marrow smears, the most substantial open-source cohort in hematology so far, comprising over 380,000 white blood cell images. To assess its generalization capability, we evaluate it on an external dataset with a challenging domain shift. We show that our model outperforms existing medical and non-medical vision models in (i) linear probing and k-nearest neighbor evaluations for cell-type classification on blood and bone marrow smears and (ii) weakly supervised multiple instance learning for acute myeloid leukemia subtyping by a large margin. A family of four DinoBloom models (small, base, large, and giant) can be adapted for a wide range of downstream applications, be a strong baseline for classification problems, and facilitate the assessment of batch effects in new datasets.

        Speaker: Valentin Korbinian Koch (HMGU)
      • 12:15
        Live-cell mAIcroscopy - Cracking the challenge to image living cells with real-time event attention 15m

        Authors: Johannes Seiffarth 1,4 | Matthias Pesch 1 | Lukas Scholtes 3 | Erenus Yildiz 3 | Richard Paul 3 | Nils Friederich 2 | Angelo Jovin Yamachui Sitcheu 2 | Ralf Mikut 2 | Hanno Scharr 3 | Dietrich Kohlheyer 1 | Katharina Nöh 1

        Affiliations: 1 IBG-1, Forschungszentrum Jülich (FZJ) | 2 Karlsruher Institut für Technologie (KIT) | 3 IAS-8, Forschungszentrum Jülich (FZJ) | 4 CSB.AVT, RWTH Aachen University

        Abstract

        Microfluidic live-cell imaging (MLCI) unlocks unique insights into living cells, their development over time, and their response to environmental cues. Exploiting high-throughput lab-on-chip devices in connection with modern automated microscopes provides unprecedented detail on the single-cell level while capturing natural biological variations by large amounts of observations. So far, the technology's focus has been on observing microbial life within an experiment through first recording microscope images, which are only subsequently analyzed to gain quantitative insights. Now, emerging AI-driven real-time image analysis empowers us to obtain such insights in real-time, for instance, to detect specific events during the experiment. This bears the opportunity to shift our position in MLCI from being retrospective analyzers to becoming AI-assisted drivers, facilitating a wide range of opportunities to exert control on microbial life in the running experiment.

        In this talk, we present our event-driven ultrahigh-throughput MLCI platform that we develop within the Helmholtz Imaging project EMSIG. We show that software-based microscope control, real-time event detection, and response scheduling fundamentally change the opportunities in MLCI experimentation. The new platform accelerates MLCI experiments, leads to ultrahigh-throughput and large-scale data acquisition, standardizes experimental procedures using software, introduces real-time insights using fast AI image processing, and enables us to perform event-driven experimentation, widening control over microbial populations while the experiment is running.

        With our contribution, we showcase that the tight interconnection of imaging hardware and real-time AI image analysis into a closed feedback loop bears the opportunity to introduce a paradigm shift in MLCI experimentation that heralds a new era in live-cell analysis.

        Speaker: Johannes Seiffarth (FZ Jülich)
    • 12:30 - 13:30
      Lunchbreak 1h
    • 13:30 - 14:30
      BYOIC - Bring Your Own Imaging Challenge
    • 14:30 - 15:15
      Keynote by Joost Batenburg: Real-time Imaging Pipelines for Tomography

      Real-time Imaging Pipelines
      30 Minutes Talk + 15 Minutes Discussion

      Convener: Martin Burger (DESY, FS-CI and Helmholtz Imaging)
      • 14:30
        Real-time Imaging Pipelines for Tomography 45m

        Computed Tomography (CT) is a powerful technique for 3D imaging of the interior of a wide range of objects, with applications in medicine, industry, and science. As a mixed experimental-computational method, it can be combined with various imaging modalities including X-ray imaging, optical imaging, and electron microscopy. In this lecture I will discuss the various challenges involved in speeding up the tomography pipeline towards the point where the object can be analyzed in full 3D during the scan. Real-time 3D imaging leads to the opportunity of adjusting the scanning process in real-time, which in turn paves the way for developing “intelligent” CT-systems that can interact with the operator to achieve more informative data acquisition.

    • 15:15 - 15:45
      Thematic Session: Data Acquisition / Image Formation - part I

      Two talks, 15 min. each

      Convener: Martin Burger (DESY, FS-CI and Helmholtz Imaging)
      • 15:15
        Shedding Light on Enclosed Cuneiform Tablets Using a Mobile X-ray Tomography Setup at the Museum 15m

        Cuneiform represents the earliest form of writing developed by the Sumerians in Mesopotamia in the second half of the fourth millennium BCE. It was used for more than three millennia all around the Middle East. To protect the clay tablets from damage and ensure confidentiality, tablets were encased in clay envelopes. Reading the message required breaking the envelope and, consequently, the artistic seal. However, some letters never reached their recipient and remained within their clay envelopes.

        To investigate the inner structure of such historical artefacts non-destructively, a mobile tomographic X-ray scanner (ENCI) has been developed over the past few years. It became operational for the first time in a recent measurement campaign at the Louvre Museum in Paris. In this contribution, we introduce the new tomographic X-ray scanner and the data evaluation pipeline, from data acquisition through tomographic reconstruction and image segmentation to the final 3D data visualization. In this way, we gain detailed insight into the materiality and fashioning of cuneiform clay tablets and make cuneiform writings visible that have remained hidden and unread for thousands of years.

        Speaker: Dr Andreas Schropp (Center for X-ray and Nano Science CXNS, Deutsches Elektronen-Synchrotron DESY)
      • 15:30
        SmartPhase: Start to End Holotomography 15m

        The goal of SmartPhase is to simplify the reconstruction of near-field holograms (NFH) obtained at experimental endstations at synchrotron radiation facilities, e.g. PETRA III at DESY, Hamburg. NFH is a phase-sensitive microscopy modality which makes it possible to image small density variations in samples, also under in-situ and operando conditions.
        These kinds of experiments require continuous feedback to the experimentalist. We have developed an algorithmic framework which can robustly recover the phase and is able to optimize the propagation distance, i.e. bring the reconstruction 'in focus', from a single measured hologram. We have also been working on automating the whole reconstruction pipeline, so that users of the experiment can take their processed data to their home institution. We will report on the current state of the different aspects of this project (online view, parameter optimization and reconstruction pipeline).

        Speaker: Johannes Hagemann (DESY)
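The focus optimization described in this abstract can be pictured as a back-propagation sweep: propagate the hologram back over candidate distances and keep the reconstruction whose amplitude is flattest, since a pure-phase sample shows minimal amplitude contrast when it is in focus. The sketch below is illustrative only and is not the SmartPhase implementation; the angular-spectrum propagator, the total-variation sharpness metric, and all parameter values (wavelength, pixel size, candidate distances) are assumptions for the example.

```python
import numpy as np

def propagate(field, dist, wl, px):
    """Angular-spectrum free-space propagation of a 2D complex field."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, px)
    fy = np.fft.fftfreq(ny, px)
    FX, FY = np.meshgrid(fx, fy)
    kernel = np.exp(-1j * np.pi * wl * dist * (FX**2 + FY**2))  # Fresnel kernel
    return np.fft.ifft2(np.fft.fft2(field) * kernel)

def amplitude_roughness(field):
    """Total variation of the amplitude; small when a phase object is in focus."""
    a = np.abs(field)
    return np.abs(np.diff(a, axis=0)).sum() + np.abs(np.diff(a, axis=1)).sum()

# synthetic pure-phase sample and its hologram at the (here known) distance d_true
wl, px, d_true = 1e-10, 1e-6, 0.010      # wavelength [m], pixel size [m], distance [m]
y, x = np.mgrid[-64:64, -64:64] * px
sample = np.exp(1j * 0.5 * np.exp(-(x**2 + y**2) / (20 * px) ** 2))
hologram = propagate(sample, d_true, wl, px)

# sweep candidate distances and keep the flattest reconstruction ("in focus")
candidates = np.linspace(0.005, 0.015, 11)
best = min(candidates, key=lambda d: amplitude_roughness(propagate(hologram, -d, wl, px)))
```

In a real pipeline the grid search would be replaced by a proper 1D optimization over the distance, but the sweep keeps the idea visible.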
    • 15:45 - 16:15
      Coffee Break 30m
    • 16:15 - 16:45
      Thematic Session: Data Acquisition / Image Formation - part II

      Two talks, 15 min. each

      Convener: Martin Burger (DESY, FS-CI and Helmholtz Imaging)
      • 16:15
        The SAXS/WAXS imaging capabilities at the SAXSMAT beamline P62 at DESY 15m

        Within this contribution, the technique of Small- and Wide-Angle X-ray Scattering (SAXS/WAXS) will be introduced in the context of nanostructure characterization in 3D. The focus is on the 3D orientation analysis of fiber-based structures using SAXS tensor tomography.

        Three selected examples will be shown:

        • Orientation analysis of collagen in breast cancer tissues. An understanding of the 3D architecture of the tumor environment is important to guide the development of novel therapeutic approaches. Especially the collagen orientation, diameter, and metal accumulation are of interest.
        • Myelin quantification and orientation in multiple sclerosis (MS) human brain sections. Since MS is a demyelinating disease, quantifying the changes in myelin levels, integrity, and neuronal tracts enables an unprecedented understanding of the disease progression.
        • What can be done with SAXS/WAXS imaging in the research field of archaeology? An example of a heritage object investigation will be shown.
        Speaker: Sylvio Haas (DESY)
      • 16:30
        Confocal microscopy in a controlled atmosphere for nano-scale nuclear magnetic resonance spectroscopy 15m

        Nitrogen-vacancy (NV) centers in diamond can be used as quantum sensors with various applications, such as nano-scale NMR spectroscopy, thanks to their magnetic field sensitivity. Interaction between NV centers and external spins (on the diamond surface) allows the former to be used for initialization, control, and read-out of potential qubits. It has been proposed to use the latter to implement a quantum simulator on a diamond surface with shallowly implanted NV centers. A monolayer of a 2D material could be one possible realization of such a quantum simulator. A flake of black phosphorus is an ideal candidate due to the 100 % natural abundance of the 31P isotope (nuclear spin I = ½) and its large gyromagnetic ratio (γ = 1.73 kHz/G), which does not overlap with other ubiquitous nuclear spins like protons and 13C. The main disadvantage of this material is that it degrades under ambient conditions. To resolve this, we present a confocal microscope with a glovebox enclosure for performing NV-based NMR spectroscopy in an inert gas atmosphere. We perform confocal microscopy imaging and optically detected magnetic resonance in a controlled magnetic field on several layers of black phosphorus. This setup could be useful for studying other oxygen-sensitive molecules and 2D materials that alter their properties upon exposure to air or moisture.

        Speaker: Kseniia Volkova (HZB)
    • 16:45 - 18:45
      Poster Session
      Convener: Martin Burger (DESY, FS-CI and Helmholtz Imaging)
    • 18:45 - 19:15
      Best Scientific Image Contest Awarding
      Convener: Dr Dagmar Kainmüller (MDC Berlin)
    • 19:15 - 21:15
      Conference Dinner 2h
  • Wednesday 15 May
    • 09:00 - 09:10
      Opening Day 2 10m
    • 09:10 - 09:25
      Image Contest - Winning Contribution
      Convener: Christian Schroer (DESY)
    • 09:25 - 10:10
      Keynote by Guido Grosse: Exploring new Avenues for Imaging the World of Changing Arctic Permafrost – From Remote Sensing to Foundation Models
      Convener: Christian Schroer (DESY)
    • 10:10 - 10:40
      Helmholtz Foundation Model Initiative (HFMI)

      Two talks on funded HFMI-Projects

      Convener: Christian Schroer (DESY)
      • 10:10
        The Human Radiome Project - Unlocking 3D Radiological Data Analysis with Next-Generation Foundation Models 15m

        The Human Radiome Project consolidates an extensive and diverse collection of 3D radiological images such as MRI and CT scans into a foundational AI model, striving to deepen our understanding of human anatomy and pathologies. By representing the full spectrum of radiologic information, the “Human Radiome” is set to enhance personalized medicine, seamlessly integrating with non-imaging modalities, such as language or genomics, for intuitive application and powerful clinical decision making.

        Speaker: Paul Jäger (DKFZ)
      • 10:25
        Synergy Unit: Developing, providing and networking foundation models 15m

        While the individual projects concentrate on their specific problems, the Synergy Unit focuses on overarching issues that are relevant to all participating projects. For example, it deals with questions relating to the scalability of the models or training with the datasets. However, this is not simply a matter of exchanging solution approaches, but rather of addressing the central question of how research on foundation models can be advanced as quickly as possible across disciplinary boundaries. In this way, the Synergy Unit ensures a long-term impact of the Helmholtz Foundation Model Initiative for the benefit of the general public.

        Speaker: Dagmar Kainmüller (MDC)
    • 10:40 - 11:10
      Coffee Break 30m
    • 11:10 - 12:40
      Thematic Session: Data Acquisition / Image Formation - part III

      Six talks, 15 min. each

      Convener: Christian Schroer (DESY)
      • 11:10
        Estimation of Calving Law Parameters from Satellite Images 15m

        Capturing the calving front motion is critical for simulations of ice shelves and tidewater glaciers. Multiple physical processes, including sliding, water pressure and failure need to be understood to accurately model the front. Calving is particularly challenging due to its discontinuous nature and modellers require more tools to examine it. 
        A common technique for capturing the front in ice simulations is the level-set method. The front is represented implicitly by the zero isoline of a function. The movement of the front is described by an advection equation, where the velocity of the front is a combination of ice velocity and frontal ablation rate.
        We develop methods to estimate parameters of calving laws from satellite images based on inverse level-set problems. The method is adaptable to different forms of calving laws as well as other interface tracking problems and handles temporal sparsity of observations and coupling with an ice sheet model.

        Speaker: Daniel Abele (Alfred-Wegener-Institut Helmholtz-Zentrum für Polar- und Meeresforschung, Deutsches Zentrum für Luft- und Raumfahrt)
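As a toy illustration of the level-set front tracking described in this abstract (not the authors' code): the calving front is the zero isoline of a signed-distance function phi, which is advected with a net speed given by the ice velocity minus the frontal ablation rate. The 1D grid, the velocity values, and the first-order upwind scheme below are all illustrative assumptions.

```python
import numpy as np

# 1D signed-distance function phi: the calving front is its zero isoline.
nx, dx, dt = 200, 50.0, 0.1           # cells, spacing [m], time step [yr]
x = np.arange(nx) * dx
phi = x - 5000.0                      # front starts at x = 5 km
ice_velocity = 400.0                  # m/yr, advances the front
ablation_rate = 250.0                 # m/yr, the calving law retreats the front
v = ice_velocity - ablation_rate      # net front speed

for _ in range(100):                  # evolve for 10 model years
    dphi = np.empty_like(phi)
    dphi[1:] = (phi[1:] - phi[:-1]) / dx   # first-order upwind (valid for v > 0)
    dphi[0] = dphi[1]                      # one-sided inflow boundary
    phi -= dt * v * dphi                   # advection: phi_t + v * phi_x = 0

front = x[np.argmin(np.abs(phi))]     # recover the front position
```

In the talk's 2D setting the same advection runs on satellite-derived velocity fields, and the calving-law parameters to be estimated enter through the ablation rate.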
      • 11:25
        Polarity-JaM: An image analysis toolbox for cell polarity, junction and morphology quantification 15m

        Cell polarity involves the asymmetric distribution of cellular components such as signaling molecules and organelles within a cell, asymmetries of a cell's shape as well as contacts with neighbouring cells. Gradients and mechanical forces often act as global cues that bias cell polarity and orientation, and polarity is coordinated by communication between adjacent cells.

        Advances in fluorescence microscopy combined with deep learning algorithms for image segmentation open up a wealth of possibilities to understand cell polarity behaviour in health and disease. We have therefore developed the open-source package Polarity-JaM, which offers versatile methods for performing reproducible exploratory image analysis. Multi-channel single cell segmentation is performed using a flexible and user-friendly interface to state-of-the-art deep learning algorithms. Interpretable single-cell features are automatically extracted, including cell and organelle orientation, cell-cell contact morphology, signaling molecule gradients, as well as collective orientation, tissue-wide size and shape variation. Circular statistics of cell polarity, including polarity indices, confidence intervals, and circular correlation analysis, can be computed using our web application. We have developed data graphs for comprehensive visualisation of key statistical measures and suggest the use of an adapted polarity index when the expected polarisation direction or the direction of a global cue is known a priori.

        The focus of our analysis is on fluorescence image data from endothelial cells (ECs) and their polarisation behaviour. ECs line the inside of blood vessels and are essential for vessel formation and repair, as well as for various cardiovascular diseases, cancer, and inflammation. However, the general architecture of the software will allow it to be applied to other cell types and image modalities. The Polarity-JaM package integrates these analyses in a reproducible manner with a level of documentation that allows the user to analyse image data accurately and efficiently, see https://polarityjam.readthedocs.io. In addition to a Python API, we provide a Napari plugin with a graphical user interface and a web app for statistical analysis at www.polarityjam.com.

        Speaker: Wolfgang Giese (Max Delbrück Center for Molecular Medicine)
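The circular statistics mentioned in this abstract can be illustrated with a minimal sketch (not the Polarity-JaM API): a common polarity index is the mean resultant length R of the cell orientation angles, which is 1 when all cells point the same way and approaches 0 for uniformly spread orientations. The sample angles below are made up.

```python
import numpy as np

def polarity_index(angles_rad):
    """Mean resultant length R of a circular sample (0 = uniform, 1 = aligned)."""
    c, s = np.cos(angles_rad).sum(), np.sin(angles_rad).sum()
    return np.hypot(c, s) / len(angles_rad)

def circular_mean(angles_rad):
    """Direction of the mean resultant vector."""
    return np.arctan2(np.sin(angles_rad).sum(), np.cos(angles_rad).sum())

aligned = np.deg2rad([10.0, 12.0, 8.0, 11.0, 9.0])     # strongly polarized cells
spread = np.deg2rad([0.0, 72.0, 144.0, 216.0, 288.0])  # uniform on the circle

r_aligned = polarity_index(aligned)            # close to 1
r_spread = polarity_index(spread)              # close to 0
mean_dir = np.rad2deg(circular_mean(aligned))  # close to 10 degrees
```

Confidence intervals and circular correlation are computed analogously from the resultant vector; the package's web app exposes these statistics without writing code.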
      • 11:40
        Developing (semi)automatic analysis pipelines and technological solutions for metadata annotation and management in high-content screening (HCS) bioimaging 15m

        Bioimaging merges microscopy, biology, and computation for single molecule to organism-level study. High-content screening (HCS) automates analysis, aiding in understanding cellular processes and drug development. Managing metadata is a challenge. NFDI4BioImaging aims to enhance FAIR principles in bioimaging. We propose a workflow for zebrafish larvae images, enriching metadata and uploading to OMERO server. Users access and analyze images, supporting reproducibility and collaboration. Integration with IDR enhances data sharing. Our approach streamlines data handling, supporting robust scientific inquiry. Through automated pipelines, we tackle the complexity of metadata, ensuring data integrity and facilitating interdisciplinary collaboration. This workflow not only enhances the efficiency of HCS bioimaging but also contributes to the wider scientific community's efforts to adopt FAIR principles, thereby advancing scientific discovery and innovation.

        Speaker: Riccardo Massei (UFZ)
      • 11:55
        UTILE: A Deep Learning-Driven Imaging Journey Across Dimensions 15m

        Bridging the gap between the novel advancements in deep learning and computer vision and the pressing challenges in energy materials research is as crucial as the individual pursuits in both domains. Particularly relevant are those innovative techniques in energy materials characterization, where rapid progress is essential to address the current global energy challenges. In the "UTILE: Autonomous Image Analysis of Energy Materials" project, we focused on automating the analysis of images related to energy materials using deep learning approaches to accelerate and enhance the work of experimentalists in these specialized fields. This work presents three distinct use-cases, each with a tool developed to process images, segment regions of interest, extract features from segmentation maps, and visualize the outcomes.

        The exploration begins with autonomous 2D Transmission Electron Microscopy (TEM) image analysis for the size and shape assessment of Platinum nanoparticles on Carbon supports, aimed at investigating polymer electrolyte membrane fuel cells (PEMFC). [1,2] Next, we introduce a temporal dimension, developing a tool for detecting oxygen bubbles in optical videos from polymer electrolyte water electrolysers (PEMWE). This tool facilitates time-resolved analysis of bubble dynamics in the water electrolyser's flow field. [3] The final stage of our journey incorporates space as the third dimension, enabling 3D analysis of hydrogen bubbles within vanadium redox flow batteries using synchrotron X-ray tomographs. This software provides swift and reliable analysis of bubble size, shape, and distribution, coupled with 3D visualization and advanced characterization features. [4]

        [1] André Colliard-Granero et al., “Deep learning for the automation of particle analysis in catalyst layers for polymer electrolyte fuel cells,” Nanoscale, vol. 14, no. 1, pp. 10–18.

        [2] André Colliard-Granero et al., “UTILE-Gen: Automated Image Analysis in Nanoscience Using Synthetic Dataset Generator and Deep Learning,” ACS Nanoscience Au, vol. 3, no. 5, pp. 398–407.

        [3] André Colliard-Granero et al., “Deep Learning-Enhanced Characterization of Bubble Dynamics in Proton Exchange Membrane Water Electrolyzers,” Physical Chemistry Chemical Physics, 2024, Accepted Manuscript. DOI: https://doi.org/10.1039/D3CP05869G

        [4] André Colliard-Granero et al., “Deep Learning for Autonomous 3D Bubble Analysis of Vanadium Flow Batteries from Synchrotron X-ray Imaging,” under preparation.

        Speaker: Andre Colliard (IEK-13 FZJ)
      • 12:10
        4D-Model-based estimators for real-time respiratory motion during lung cancer radiotherapy treatment 15m

        LINAC-integrated real-time 3D imaging is the missing puzzle piece needed for safe dose escalation in lung cancer treatments, particularly when tumors are near toxicity-sensitive healthy structures. Machine learning and its pace of development are promising for approximating or even predicting the motion patterns of individual patients based on available 2D cineMR scans. Accurate predictions of the future positions of targets and structures at risk, several milliseconds ahead, create an interval for action-level decisions, e.g. shutting off the beam or even adapting the beam intensities on the fly. However, rigorous evaluation, often reliant on human observers, remains a challenge due to the complexity and amount of multimodal 2D cine, 3D, and 4D scan data of the mediastinal lung region.
        We investigate the accuracy of biomechanically-driven computational patient models to bring structure and coherence to this complex 4D data puzzle, promising to replace the need for extensive delineation or landmark annotation for every single patient. Beginning with ribcage motion, we establish, visualize, and evaluate the digitally reconstructed 4D anatomy.

        Speaker: Kristina Giske (Division of Medical Physics, Computational Patient Models Group, DKFZ)
      • 12:25
        Imaging the impurity distribution in polar ice cores: future pathways in measurement and analysis 15m

        Polar ice cores are invaluable archives of our climate past, but retrieving climate signals from the oldest and highly thinned ice remains a challenge. A breakthrough can come from imaging the impurity distribution in ice using laser-ablation inductively-coupled plasma mass spectrometry (LA-ICP-MS), but dedicated expertise in image analysis is urgently needed.
        Here we illustrate this new interdisciplinary frontier with some key open questions. To study the ice-impurity interplay in deep ice, we must significantly extend the physical size of the images beyond a few mm2. We show how inpainting techniques guided by optical data promise to upscale sparse LA-ICP-MS lines to a full, comprehensive image. This approach can be extended with a 3D structural model of the ice chemistry. In this way we can predict how processes at the crystal scale affect bulk concentrations measured by melting cm-volumes of ice. Such image analysis tools based on deep learning techniques will also benefit the already widespread application of LA-ICP-MS imaging in the biogeosciences.

        Speaker: Pascal Dominik Bohleber (AWI)
    • 12:40 - 12:45
      Closing
      Convener: Christian Schroer (DESY)
    • 12:45 - 14:20
      Networking Lunch 1h 35m