31 May 2022 to 1 June 2022
Axica
Europe/Berlin timezone

Scientific Programme

Please find below extended information on the programme:

  • Keynotes
  • Scientific Talks
  • Additional Program (panel discussion, workshop, awards,...)
  • Poster Session
  • Keynotes

    • Space-Based Synthetic Aperture Radar (SAR) Imaging for Climate Research and Environmental Monitoring

      Prof. Alberto Moreira, German Aerospace Center (DLR), Karlsruhe Institute of Technology (KIT)

      Space-based Synthetic Aperture Radar (SAR) has been widely used for Earth remote sensing for more than 40 years. SAR is unique in its imaging capability: It provides high-resolution imaging independently of daylight, cloud cover and weather conditions for a multitude of applications ranging from geoscience and climate change research, environmental and Earth system monitoring, 2D and 3D mapping, change detection, 4D mapping (space and time), security-related applications up to planetary exploration. Therefore, it is predestined to monitor dynamic processes on the Earth’s surface in a reliable, continuous and global way. SAR systems have a side-looking imaging geometry and are based on a pulsed radar installed on a moving platform. By means of a coherent processing of the received echo signal, a long synthetic aperture can be formed, which corresponds to a synthetic antenna length of a few hundred meters and even several kilometers in the airborne and spaceborne case, respectively. Since the length of the synthetic aperture increases with the platform height, the spatial resolution becomes independent of the distance to the target, making SAR a unique technique for space-based Earth observation.
      This talk first describes the principles of SAR imaging along with various application examples. Next, advanced techniques for SAR imaging like interferometry, polarimetry, tomography and holography are presented. In combination with the latest digital beamforming imaging techniques, a highly innovative spaceborne SAR mission, called Tandem-L, has been proposed for global observation of dynamic processes on the Earth's surface with hitherto unprecedented quality and resolution. The talk concludes with an outlook on the future of space-based SAR imaging.
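
      As a back-of-the-envelope illustration of the claim above that the azimuth resolution becomes independent of the distance to the target, the following sketch evaluates the textbook stripmap SAR relations (synthetic aperture length and azimuth resolution); the numbers are illustrative and do not describe any specific mission.

        # Minimal sketch of the textbook stripmap SAR relations (illustrative values only).
        wavelength = 0.24          # L-band wavelength in metres
        antenna_length = 10.0      # physical antenna length in metres

        for slant_range in (700e3, 7000e3):   # two very different distances to the target
            # Synthetic aperture length: along-track extent over which a target is illuminated.
            synthetic_aperture = wavelength * slant_range / antenna_length
            # Azimuth resolution of the focused image.
            azimuth_resolution = wavelength * slant_range / (2.0 * synthetic_aperture)
            print(f"R = {slant_range / 1e3:7.0f} km | L_sa = {synthetic_aperture / 1e3:6.1f} km "
                  f"| azimuth resolution = {azimuth_resolution:.2f} m")

        # The azimuth resolution reduces to antenna_length / 2 and thus does not depend on the range.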

    • Structural Biology of SARS-CoV-2: Making the Invisible Enemy Visible

      Dr. Andrea Thorn, Universität Hamburg

      During the COVID-19 pandemic, structural biologists rushed to solve the structures of the 28 proteins encoded by the SARS-CoV-2 genome in order to understand the viral life cycle and to enable structure-based drug design. In addition to the 204 previously solved structures from SARS-CoV-1, over 2000 structures covering 18 of the SARS-CoV-2 viral proteins were released in a span of a few months. These structural models serve as the basis for research to understand how the virus hijacks human cells, for structure-based drug design, and to aid in the development of vaccines. The Coronavirus Structural Task Force [1] rapidly categorized, evaluated and reviewed all of these experimental protein structures in order to help original authors and downstream users such as Folding@Home, OpenPandemics, and the EU JEDI COVID-19 challenge. We also created reviews, illustrations, animations and 3D printable models of the virus from these experimental results, which we distributed via www.insidecorona.net. In the beginning, there were no tenured academics in the Coronavirus Structural Task Force; we were an ad hoc collaboration of 26 researchers across nine time zones, brought together by the desire to fight the pandemic. Still, we were able to rapidly establish a large network of COVID-19 related research, forge friendships and collaborations across national boundaries, spread knowledge about the structural biology of the virus and provide improved models for in-silico drug discovery projects. Now, after more than two years, we have consolidated our collective knowledge about the virus, and can leverage this insight for the question: What is next?

      [1] Croll, T., Diederichs, K., Fischer, F., Fyfe, C., Gao, Y., Horrell, S., Joseph, A., Kandler, L., Kippes, O., Kirsten, F., Müller, K., Nolte, K., Payne, A., Reeves, M.G., Richardson, J., Santoni, G., Stäb, S., Tronrud, D., Williams, C., Thorn, A.* (2021) Making the invisible enemy visible. Nature Structural & Molecular Biology 28, 404–408. https://doi.org/10.1038/s41594-021-00593-7

    • X-roads of Imaging and Data Science: Projects to Accelerate Imaging

      Prof. Wojtek Goscinski, CEO at National Imaging Facility (NIF), Australia

      Medical research is increasingly dependent on the ability to capture, store, process, integrate, and analyse large volumes of data. Moreover, in the foreseeable future, the size of research studies will continue to increase, and the overarching driver is to improve research quality and to increase reproducibility. Large-scale studies require a step change in the way that data is treated.

      The opportunities are accelerating: with the widespread introduction of ML methods, such as convolutional neural networks, the information that can be quantitatively gathered from images will increase, and so will the value of imaging within the research lifecycle. ML-based image analysis can extract more information, with less human intervention, across larger study sizes.

      This presentation will provide an overview of a number of Australian research infrastructure initiatives that attempt to bridge the gap between the instrument and computing capability, by providing imaging users with accessible, powerful and easy-to-use environments. I will present initiatives to integrate instruments, manage imaging research data at a national scale, provide crucial data analysis skills and accelerate the adoption of standards and machine learning techniques across areas such as structural biology and neuroimaging, and I will provide an overview of the future data strategy being developed across the National Imaging Facility.

  • Scientific Talks

    • Computational Imaging for Art Investigation and for Neuroscience

      Prof. Pier Luigi Dragotti, Imperial College London

      In this talk we provide an overview of two research areas where computational imaging is likely to have an impact. We first focus on the heritage sector, which is experiencing a digital revolution driven in part by the increasing use of non-invasive, non-destructive imaging techniques. These new imaging methods provide a way to capture information about an entire painting and can give us information about features at or below the surface of the painting. We focus on Macro X-Ray Fluorescence (XRF) scanning, which is a technique for the mapping of chemical elements in paintings. After describing in broad terms the workings of this device, a method that can process XRF scanning data from paintings is introduced. The results presented show the ability of our method to detect and separate weak signals related to hidden chemical elements in the paintings. We then discuss results on Leonardo’s “The Virgin of the Rocks” and show that our algorithm is able to reveal, more clearly than ever before, the hidden drawings of a previous composition that Leonardo then abandoned for the painting that we can now see. In the second part of the talk, we focus on two-photon microscopy and neuroscience. Multi-photon microscopy is unparalleled in its ability to image cellular activity and neural circuits, deep in living tissue, at single-cell resolution. In this talk we introduce light-field two-photon microscopy and present a method to localize neurons in 3-D. The method is based on the use of proper sparsity priors, novel optimization strategies and machine learning.

    • Proposal of a Joint Inversion Imaging Approach for the Estimation of Brain Myelin Content using MRI

      Dr. Ravi Dadsena, DZNE

      Magnetic Resonance Imaging (MRI) is a useful method for detecting focal macroscopic tissue abnormalities in the brains of patients with neurodegenerative disorders. A variety of imaging techniques are commonly used to estimate the brain myelin content. Myelin water fraction (MWF) mapping using MRI has enabled researchers to directly examine myelination and demyelination in both developing and diseased brains. T1-, T2-, and T2*-weighted multi-echo data have been proposed to estimate MWF in the human brain. However, even for the relatively simple two-pool signal model, consisting of myelin water and non-myelin-associated water, the number of dimensions of the parameter space for acquiring MWF estimates remains high, which makes the parameter estimation challenging. The aim of this research is to improve the accuracy and precision of brain myelin content mapping. Utilizing geophysical joint inversion concepts, we propose a novel joint inversion imaging approach where data from multiple contrasts are combined in a single optimization process. When compared to state-of-the-art methods for MWF estimation, the proposed method is expected to be less biased and less susceptible to noise.
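
      For readers unfamiliar with the two-pool signal model mentioned above, the sketch below fits a simple bi-exponential multi-echo T2 decay (myelin water plus non-myelin-associated water) to synthetic data with SciPy; it is a generic illustration of why the parameter estimation is difficult, not the joint inversion approach proposed in the talk, and all numbers are illustrative.

        import numpy as np
        from scipy.optimize import curve_fit

        def two_pool(te, mwf, t2_myelin, t2_free, s0):
            """Bi-exponential multi-echo decay: myelin water + non-myelin-associated water."""
            return s0 * (mwf * np.exp(-te / t2_myelin) + (1 - mwf) * np.exp(-te / t2_free))

        te = np.arange(10, 330, 10, dtype=float)                 # echo times in ms
        rng = np.random.default_rng(0)
        truth = dict(mwf=0.15, t2_myelin=20.0, t2_free=80.0, s0=1.0)
        signal = two_pool(te, **truth) + rng.normal(0, 0.005, te.size)   # noisy synthetic decay

        popt, pcov = curve_fit(two_pool, te, signal,
                               p0=[0.1, 15.0, 70.0, 1.0],
                               bounds=([0, 5, 40, 0], [0.5, 40, 200, 2]))
        print("estimated MWF:", popt[0], "+/-", np.sqrt(pcov[0, 0]))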

    • Advances in landslide analysis using remote sensing: Results from Multisat4slows project

      Prof. Mahdi Motagh, Helmholtz Centre Potsdam GFZ

      In this presentation we summarize the achievements made within the first year of the Multisat4slows project (Developing multi-scale satellite-based imaging platform for change detection: application for landslide hazard classification and early warning service), financed within the Helmholtz Imaging 2020 call. In particular, we introduce a new reference database which has been compiled for training, testing and validating machine learning algorithms, discuss example applications of how various parameters extracted from SAR imagery improve change detection related to landslides, elaborate on our experience in using data-driven approaches for event timing detection, and finally introduce the first version of a publicly available Data Analytics toolbox for rapid analysis of landslides, where the developed models are deployed.

      Co-authors: Aiym Orynbaikyzy, Daniel Eggert, Simon Plank, Mike Sips, Magdalena Vassileva, Wandi Wang

    • Deep Learning for Lithological Point Cloud Segmentation

      Dr. Ahmed J. Afifi, HZDR

      Deep learning (DL) techniques are now ubiquitous in the field of computer vision and its various applications (classification, detection, and segmentation). Recent developments in 3D sensing technology and low-cost devices facilitate the process of 3D data collection. Light Detection and Ranging (LiDAR) and Structure from Motion (SfM) can rapidly generate centimeter to sub-centimeter resolution 3D point clouds, which are rapidly becoming essential to real-time applications such as autonomous driving and robotics. In the Hyper 3D-AI project, we focus on solving the classification and segmentation tasks of multi-sensor 3D data using deep learning, i.e. the proposed methods utilize the different information acquired by the sensors to achieve a segmentation that is sensitive to geometric, textural and spectral attributes. We propose different deep models that can handle 3D data represented as point clouds and train them for the classification and segmentation tasks.

      Authors: Ahmed J M Afifi, Sam T. Thiele, Sandra Lorenz, Richard Gloaguen

    • Exploring the Digital Twin of the Earth: Challenges in the visualization of kilometer-scale Earth System model simulations

      Nikolay Koldunov, AWI

      The Earth's climate is changing, and science should provide better information about future climate change impacts on global and regional levels. Current developments in High Performance Computing and climate modelling allow us to simulate the coupled climate system with an unprecedented level of detail, using spatial resolutions of several kilometres in the atmosphere and ocean. Those simulations, called Digital Twins of the Earth (DTE), produce very large amounts of data that are hard to visualise and explore. We will discuss opportunities and challenges that come with the emergence of DTE, including the need for new visualisation and analysis tools, image-based data archiving and distribution, as well as the use of those data for educational and science communication purposes.

    • Deep-Learning Meets Microbial Live-Cell Imaging: Powerful Analysis Workflows from Annotation to Prediction

      Bastian Wollenhaupt, FZJ, and Johannes Seiffarth, FZJ

      Time-lapse microscopy in combination with precisely controllable microfluidic lab-on-a-chip systems allows observing the emergence of microbial populations starting from one single cell. Live-cell imaging is thus a powerful tool for studying heterogeneity of cell growth, morphological development, or cell-to-cell interaction. Such insights are door openers for predicting strain performances and engineering synthetic microbial cultures for industrial application. The key barrier to extracting spatiotemporal information from live-cell imaging experiments, each comprising hundreds of time-lapse image stacks, is accurate and automated cell segmentation.

      New deep-learning methods offer high quality and throughput for image processing of microbial time-lapses, scaling to hundreds of thousands of cells. To achieve top accuracy beyond specific imaging modalities and microbial morphologies, however, segmentation models require an enormous amount of ground truth data for training. Also, available approaches are difficult to handle due to heavy software and hardware dependencies. Within the SATOMI project, we developed two open access tools, microbeSEG and ObiWan-Microbi, that cover the full workflow of deep-learning methods in microfluidic live-cell analysis: microbeSEG facilitates the creation and management of ground-truth datasets and training of deep-learning segmentation, while ObiWan-Microbi provides a suite of semi-automated annotation tools and unlocks deep-learning execution in the cloud.

      In our talk, we start by outlining the challenges arising in the analysis of large amounts of microfluidic image data from the different “user” and “developer” perspectives. We then present our deep-learning workflow for creating custom cell segmentations, which we showcase in action on a challenging example of microbial consortium time-lapse imagery, yielding results that were previously impossible to extract.
      Although developed in the field of microbial live-cell imaging, the workflow is generally applicable wherever segmentation and image processing are employed.
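
      To give a flavour of the "training of deep-learning segmentation" step in such a workflow, the sketch below runs one training step of a deliberately tiny fully-convolutional network on a synthetic image/mask pair; it is a generic PyTorch illustration and does not use the microbeSEG or ObiWan-Microbi code.

        import torch
        import torch.nn as nn

        # A tiny fully-convolutional network standing in for a cell segmentation model.
        model = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 1),                      # two classes: background / cell
        )
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        criterion = nn.CrossEntropyLoss()

        # Stand-ins for one annotated time-lapse frame and its ground-truth mask.
        image = torch.rand(1, 1, 256, 256)
        mask = torch.randint(0, 2, (1, 256, 256))

        optimizer.zero_grad()
        loss = criterion(model(image), mask)
        loss.backward()
        optimizer.step()
        print("one training step, loss =", float(loss))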

    • Using k-Means to identify Coronal Holes on AIA/SDO Images

      Dr. Stefano Bianco, GFZ

      Through its magnetic activity, the Sun governs the conditions in Earth's vicinity, creating space weather events, which have drastic effects on our space- and ground-based technology. The identification of coronal holes (CHs) on the Sun as one of the source regions of the solar wind is therefore crucial to achieve predictive capabilities. In this study, we used an unsupervised machine learning method, k-means, to pixel-wise cluster the passband images of the Sun in the wavelengths 171 Å, 193 Å, and 211 Å. Our results show that the pixel-wise k-means clustering together with systematic pre- and post-processing steps provides results compatible with those from complex methods, such as CNNs. More importantly, our study shows that there is a need for a CH database for which a consensus about the CH boundaries has been reached independently by observers. This database can then be used as the "ground truth" when applying a supervised method, or simply to evaluate the quality of the models.

      Co-authors: Fadil Inceoglu, Stephan Heinemann, Yuri Shprits
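
      A minimal sketch of the pixel-wise k-means step described above, using scikit-learn on a synthetic three-channel image standing in for co-registered 171 Å, 193 Å and 211 Å passband images; the systematic pre- and post-processing steps of the study are not reproduced here.

        import numpy as np
        from sklearn.cluster import KMeans

        # Synthetic stand-in for co-registered AIA passband images (171, 193, 211 Angstrom).
        rng = np.random.default_rng(42)
        height, width, channels = 128, 128, 3
        cube = rng.random((height, width, channels))

        # Pixel-wise clustering: every pixel becomes one sample with three intensity features.
        features = cube.reshape(-1, channels)
        labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
        segmentation = labels.reshape(height, width)

        # The darkest cluster (lowest mean intensity) is a candidate for coronal-hole pixels.
        cluster_means = [features[labels == k].mean() for k in range(4)]
        ch_candidate = int(np.argmin(cluster_means))
        print("coronal-hole candidate cluster:", ch_candidate,
              "covers", np.mean(segmentation == ch_candidate) * 100, "% of pixels")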

    • Metrics Reloaded – A new recommendation framework for biomedical image analysis validation

      Dr. Paul Jäger, Helmholtz Imaging

      Meaningful performance assessment of image analysis algorithms depends on objective and transparent performance metrics. Despite major shortcomings in the current state of the art, so far only limited attention has been paid to practical pitfalls associated with the use of particular metrics for an image analysis task. Therefore, a number of international initiatives have collaborated to provide researchers with guidance and tools for selecting performance metrics in a problem-aware manner. Based on input from experts from more than 60 institutions worldwide, we believe our metric recommendation framework to be useful to the community and to enhance the quality of image analysis algorithm validation.
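
      As a small, generic example of the kind of metric pitfall such a framework addresses (an editorial illustration, not taken from the recommendation framework itself): the Dice score is undefined when both the reference and the prediction are empty, and an implementation has to make that convention explicit.

        import numpy as np

        def dice(pred, ref, empty_value=1.0):
            """Dice similarity coefficient for binary masks.

            When both masks are empty the score is undefined (0/0); `empty_value`
            makes the chosen convention explicit instead of silently returning NaN.
            """
            pred, ref = np.asarray(pred, bool), np.asarray(ref, bool)
            denom = pred.sum() + ref.sum()
            if denom == 0:
                return empty_value
            return 2.0 * np.logical_and(pred, ref).sum() / denom

        print(dice(np.zeros((4, 4)), np.zeros((4, 4))))   # convention, not measurement: 1.0
        print(dice(np.ones((4, 4)), np.zeros((4, 4))))    # 0.0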

    • Multi-beam X-ray ptychography using coded probes

      Dr. Mikhail Lyubomirskiy, DESY

      Ptychography is a high-resolution imaging technique known for being capable of reaching sub-10 nm resolution. A coherent beam is required to reach this resolution. Currently, the coherent fraction of the beam for third-generation synchrotron sources is only 10%. Thus, ptychography needs to isolate the coherent flux. Consequently, more than 90% of the incoming X-rays are wasted. This reduces the total flux on the sample, which leads to an increased scan time or reduced spatial resolution due to a lower signal-to-noise ratio. The need to choose between high resolution and a large field of view is often a showstopper for many experiments. This problem is especially dire for biological samples, where a single specimen is often not representative of a population. The latest advances in ptychographic algorithms provide the possibility of introducing mutually incoherent modes. This was adapted in recent developments for visible light microscopy with multiple independent beams. The sample is scanned simultaneously, not by one beam but by many, which can be mutually incoherent with each other. Each mode in the algorithm is used to reconstruct one beam and a corresponding region of the sample, expanding the scan area by the number of beams. We have successfully implemented this technique in the X-ray regime using up to 6 beams in parallel. Each beam was uniquely coded, which provided robust disentangling of the diffracted signal from different sample areas and thus an artifact-free reconstruction of the object.
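
      To make the idea of mutually incoherent modes concrete, the sketch below builds a toy far-field forward model in which the recorded intensity is the incoherent sum of the diffraction patterns of several independent coded probes; it is a conceptual illustration with random stand-in data, not the reconstruction code used for the experiment.

        import numpy as np

        rng = np.random.default_rng(1)
        n, n_beams = 64, 3

        # Toy complex object and three independent, uniquely coded probes (random phase plates).
        obj = np.exp(1j * rng.random((n, 3 * n)))
        probes = [np.exp(1j * 2 * np.pi * rng.random((n, n))) for _ in range(n_beams)]

        # Each beam illuminates its own region of the sample; the beams are mutually
        # incoherent, so the detector records the sum of the individual intensities.
        intensity = np.zeros((n, n))
        for b, probe in enumerate(probes):
            patch = obj[:, b * n:(b + 1) * n]
            exit_wave = probe * patch
            farfield = np.fft.fftshift(np.fft.fft2(exit_wave))
            intensity += np.abs(farfield) ** 2      # incoherent (intensity) addition of modes

        print("single detector frame encodes", n_beams, "sample regions:", intensity.shape)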

  • Additional Program

    • Panel discussion: Challenges for imaging. How do we deal with them?

      Moderator: PD Dr. Wolfgang zu Castell

      Guests:
      Prof. Otmar D. Wiestler (Helmholtz Association),
      Dr. Holger Becker (Member of the German Bundestag),
      Dr. Dagmar Kainmüller (MDC),
      Dr. Andrea Thorn (Universität Hamburg)

    • Opening Remarks: Mind the gap on bringing research and politics closer together

      Dr. Holger Becker, Member of the German Bundestag

    • Intro: Helmholtz Imaging

      Prof. Christian Schroer, DESY

    • Best Scientific Image Contest 2022

      Exhibition and Awards

      Jury Awards

      Public Choice Awards

      Participants' Choice Awards: At the conference you have the opportunity to select 3 additional winners from the "20 Favorites of the Jury" by your vote. Walk around, enjoy the aesthetics and the scientific content of the exhibition, and choose your personal favorites with the stickers that you received with the booklet when registering.

    • Workshop: Looking to the Future

      Moderators: Christian Schroer, Sara Krause-Solberg

      Three talks on future topics related to imaging, intended to trigger discussion and gather voices from the imaging community. Each 15-minute talk is followed by a 10-minute discussion.

      Next Big Challenges for Imaging Science

      Pier Luigi Dragotti, Imperial College London

      Big Data & Scalability

      Prof. Volker Guelzow, DESY

      AI for Imaging

      Dagmar Kainmüller, MDC

  • Poster Session

    • UTILE project: Autonomous Image Analysis to Accelerate the Discovery and Integration of Energy Materials

      Andre Colliard Granero, IEK-13, FZJ

      UTILE will first establish a comprehensive domain ontology for multi-scale characterization of electrochemical devices based upon EMMO and CHADA - from nanoparticles through ink agglomerates, interfaces, and nanopores, all the way up to the cell and stack levels. To accomplish this goal, a collaborative imaging platform with semantic search capabilities will be developed and implemented, connecting all partners and allowing for automated image labelling, the collection of standardised full metadata records, image quality control, pre-processing, and the generation of unique data identifiers for traceability. This platform will first make use of collected imaging data over time and will assist the development of research methods based on open data and digitalization of energy materials.

      The annotated imaging datasets will be utilized to build data-driven models for autonomous characterisation using end-to-end deep learning-based pipelines. The latter will use recent breakthroughs in deep learning techniques based on Convolutional Neural Networks for a range of analytical tasks such as segmentation, detection, and super-resolution microscopy. Unlike traditional approaches, which need several pre-processing steps to analyse regions of interest (ROIs), deep learning models eliminate human interaction entirely by extracting the information content directly from the images. The robustness of deep learning models will be evaluated using a number of performance measures that, unlike the standard technique, integrate well-defined quantification criteria.

      On the basis of collected real-world and synthetic images, robust and generalizable deep learning models will be trained to rapidly extract complex structural elements (e.g., particle size and shape, pore space geometry, fractures) and discover correlations with the recorded metadata from the previous step (structure–property–performance). To automate population, size, and shape distribution analyses, systematic pipelines will be constructed based on instance segmentation of ROIs such as nanoparticles, nanopores, gas bubbles, or fractures. The technique involves supervised, self-supervised, and generative model learning for the task of instance segmentation of ROIs, followed by machine vision-based programmed analysis, statistics, and visualisation of the predicted ROI.

      Additionally, we will use advanced transfer learning methods based on multi-task and meta-learning to ensure the robust and rapid transfer of pre-trained models to smaller target datasets with unknown materials. To accomplish this goal, end-to-end training pipelines for evaluating structure-property relationships will be constructed utilising multi-task learning algorithms and physics-informed deep learning. Given the efficacy of multi-task learning algorithms in learning across several domains with shared structures, this task should allow direct prediction of structural performance and lifetime characteristics as an output of the deep learning model.
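
      As a pointer to what "instance segmentation of ROIs" refers to in practice, the sketch below runs an off-the-shelf Mask R-CNN from torchvision on a random stand-in image; the actual UTILE pipelines, training data, and task-specific models are of course not reproduced here.

        import torch
        from torchvision.models.detection import maskrcnn_resnet50_fpn

        # Off-the-shelf instance segmentation model (pre-trained on COCO) in inference mode.
        model = maskrcnn_resnet50_fpn(weights="DEFAULT")       # older torchvision: pretrained=True
        model.eval()

        # Stand-in for a micrograph mapped to a three-channel float image in [0, 1].
        image = torch.rand(3, 512, 512)

        with torch.no_grad():
            prediction = model([image])[0]   # one dict per image: boxes, labels, scores, masks

        keep = prediction["scores"] > 0.5    # simple confidence threshold on detected instances
        print("instances above threshold:", int(keep.sum()))
        print("per-instance mask tensor shape:", tuple(prediction["masks"].shape))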

    • PtyNAMi - Ptychographic Nano-Analytical Microscope

      Andreas Schropp, Center for X-ray and Nano Science CXNS, DESY

      Authors: Andreas Schropp, Ralph Döhrmann, Jan Garrevoet, Christian Schroer

      Ptychographic X-ray imaging at the highest spatial resolution requires an optimal experimental environment, providing a high coherent flux, excellent mechanical stability and a low background in the measured data. This requires, for example, a stable performance of all optical components along the entire beam path, high temperature stability, a robust sample and optics tracking system, and a scatter-free environment. The Ptychographic Nano-Analytical Microscope (PtyNAMi) has been developed by the group 'X-ray Nanoscience and X-ray Optics' and is installed in the nanohutch of beamline P06 at PETRA III (DESY). The microscope was optimized in view of these enhanced experimental requirements and is designed to create nanofocused X-ray beams of 50 nm and even smaller, providing local elemental, chemical and structural information of a specimen simultaneously. Here, we present different high-resolution X-ray imaging results obtained at this instrument.

    • Towards distributed imaging for autonomous seismic surveys in multi-agent networks

      Ban-Sok Shin, DLR

      Authors: Ban-Sok Shin, Dmitriy Shutin

      Distributed imaging is of high relevance for autonomous seismic surveys conducted by a multi-agent network. Such systems play an important role for future space missions to image near-surface anomalies such as lava tubes. The goal of distributed imaging is to obtain an estimate of the subsurface image at each agent locally via cooperation within the network. The image is resolved with respect to a physical parameter of the subsurface such as seismic wave speed or material density. We proposed a distributed version of full waveform inversion (FWI) that relies on the adapt-then-combine technique. FWI is a geophysical imaging technique that obtains high-resolution images by exploiting the wave equation. In the proposed method, gradients based on the local error residual and local subsurface images are exchanged among neighboring agents. With the proposed method, images close to the global subsurface image can be obtained at each agent in the network via such cooperation.
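
      For readers unfamiliar with the adapt-then-combine (ATC) strategy mentioned above, the sketch below applies it to a toy distributed least-squares problem: each agent takes a gradient step on its local data misfit (adapt) and then averages the estimates of its neighbours (combine). The actual method uses FWI gradients derived from the wave equation, which are not reproduced here; all quantities are synthetic stand-ins.

        import numpy as np

        rng = np.random.default_rng(0)
        n_agents, dim = 5, 10
        x_true = rng.normal(size=dim)                    # stand-in for the subsurface model

        # Each agent observes the common model through its own local measurements.
        A = [rng.normal(size=(20, dim)) for _ in range(n_agents)]
        y = [A_k @ x_true + 0.01 * rng.normal(size=20) for A_k in A]

        # Ring topology: every agent combines with itself and its two neighbours.
        neighbours = {k: [(k - 1) % n_agents, k, (k + 1) % n_agents] for k in range(n_agents)}

        x = [np.zeros(dim) for _ in range(n_agents)]
        step = 5e-3
        for _ in range(1000):
            # Adapt: local gradient step on the local data misfit.
            psi = [x[k] - step * A[k].T @ (A[k] @ x[k] - y[k]) for k in range(n_agents)]
            # Combine: average the intermediate estimates within each neighbourhood.
            x = [np.mean([psi[j] for j in neighbours[k]], axis=0) for k in range(n_agents)]

        print("max deviation from the true model:",
              max(np.linalg.norm(x_k - x_true) for x_k in x))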

    • In situ nanotomographic X-ray imaging of magnesium implant degradation

      Berit Zeller-Plumhoff, Hereon

    • Correlative Microscopy of the Rhizosphere - Progress and Challenges

      Chaturanga D. Bandara, UFZ

      ProVis – Centre for Chemical Microscopy, Department of Isotope Biogeochemistry, Helmholtz Centre for Environmental Research – UFZ, Leipzig, 04317, Germany

      Authors: Bandara, C.D., Schmidt, M., Davoudpour, Y., Stryhanyuk, H., Richnow, H.H., Musat, N.

      Here we present a combinatory approach to identifying the spatial distribution of bacteria in the rhizosphere. We have embedded the soil samples in LR White resin, and the soil structure was characterised using micro-CT. Bacteria in embedded soil were identified by fluorescence in-situ hybridisation and characterised by fluorescence microscopy. Samples were correlatively characterised by helium ion microscopy, scanning electron microscopy-EDX, ToF-SIMS, NanoSIMS, and confocal Raman spectroscopy for a comprehensive spatial and chemical analysis of the microenvironment of bacteria. Acquired micrographs of the different imaging modalities were registered using the Correlia plugin in the Fiji image processing software. Further, we have adapted a deep-learning image segmentation approach to identify roots in the micro-CT data. We further discuss the current challenges in 3D-2D image registration for comprehensive data analysis related to microbial distribution in the rhizosphere.

    • Initial results using neuroimaging features to predict the genetic risk of RLS

      Federico Raimondo, INM-7, FZJ

      Authors: Federico Raimondo, Konrad Oexle, Vera Komeyer, Jan Kasper, Sabrina Primus, Juliane Winkelmann, Simon Eickhoff, Kaustubh Patil

      Restless Legs Syndrome (RLS) is characterized by an urge to move the legs when at rest, resulting in severe insomnia and subsequent depression. While the ING at HMGU has deciphered a substantial part of the polygenic basis of RLS, there is still a lack of phenotypic and brain-derived biomarkers. Using the UKB database, we extracted gray matter volume, fractional amplitude of low-frequency fluctuation, and global and local correlation from structural and functional MRI. From the genetic data, we extracted the polygenic risk score (PRS) of RLS. We then created two machine learning models aimed at predicting the PRS from the brain-derived features.
      So far, none of the learning algorithms was able to predict the PRS from the neuroimaging features. However, several relevant brain-derived markers have not been used yet. Despite preliminary negative results, the technical basis for large-scale analysis including neuroimaging and genetic data from the UKB database is now ready to be used for further enquiries.
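
      A minimal sketch of the kind of prediction setup described above (brain-derived features regressed onto a polygenic risk score) using scikit-learn with synthetic data; it is not the actual UKB pipeline, feature set, or model of this study.

        import numpy as np
        from sklearn.linear_model import RidgeCV
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        n_subjects, n_features = 500, 200            # stand-ins for subjects and brain features
        X = rng.normal(size=(n_subjects, n_features))        # e.g. GMV, fALFF, GCOR, LCOR values
        prs = X[:, :5] @ rng.normal(size=5) + rng.normal(scale=5.0, size=n_subjects)  # synthetic PRS

        model = make_pipeline(StandardScaler(), RidgeCV(alphas=np.logspace(-3, 3, 13)))
        scores = cross_val_score(model, X, prs, cv=5, scoring="r2")
        print("cross-validated R^2: %.3f +/- %.3f" % (scores.mean(), scores.std()))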

    • Computational and machine learning approaches for the genetic analysis of histological images

      Francesco Paolo Casale, Helmholtz Munich

    • Federated Learning for Data and Model Privacy in Cloud

      Hussain Ahmad Madni, University of Udine

    • Mapping Arctic treeline vegetation using LiDAR data in the Mackenzie Delta area, Canada

      Inge Grünberg, AWI

    • Multi-axes fusing for uncertainty estimation and improved segmentation of biodegradable bone implants in SRµCTs

      Ivo Matteo Baltruschat, Research and Innovation in Scientific Computing, DESY

      Authors: Ivo-Matteo Baltruschat, Hanna Ćwieka, Diana Krüger, Berit Zeller-Plumhoff, Frank Schluenzen, Regine Willumeit-Römer, Julian Moosmann, Philipp Heuser

      Segmentation of synchrotron microtomograms (SRµCTs) is very challenging, both for algorithmic solutions and for domain experts. To characterize biodegradable bone implants based on automatic segmentation, DESY and Hereon investigated the scaling of the 2D U-net for high-resolution volumes using three key model hyperparameters (i.e., model width, depth, and input size). To utilize the 3D information from the SRµCTs, the prediction is made from multiple viewing directions and then fused by a voting method. In the evaluation, we compare the results by intersection over union (IoU). In summary, combined scaling of the U-net (i.e., all three model parameters are optimized together) and multi-axis prediction fusing with soft voting yields the highest IoU for the least abundant class. The multi-axes prediction allows the computation of uncertainty estimates with very low additional computational cost. Overall, the time needed to segment a single 3D SRµCT is reduced by an order of magnitude.
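
      The soft-voting fusion mentioned above reduces, in essence, to averaging the per-class probability maps predicted along the three viewing axes before taking the argmax; a minimal sketch with random stand-in predictions is given below (the U-net itself and the IoU evaluation of the study are omitted).

        import numpy as np

        rng = np.random.default_rng(0)
        n_classes, depth, height, width = 3, 32, 32, 32

        # Stand-ins for softmax probability volumes predicted slice-wise along three viewing
        # axes and re-assembled into a common (class, z, y, x) orientation.
        prob_axis = [rng.dirichlet(np.ones(n_classes), size=(depth, height, width)).transpose(3, 0, 1, 2)
                     for _ in range(3)]

        # Soft voting: average the class probabilities over the axes, then take the argmax.
        fused_prob = np.mean(prob_axis, axis=0)
        segmentation = np.argmax(fused_prob, axis=0)

        # The per-voxel disagreement between the axes doubles as a cheap uncertainty estimate.
        uncertainty = np.std(np.stack(prob_axis), axis=0).max(axis=0)
        print("fused segmentation:", segmentation.shape, "uncertainty map:", uncertainty.shape)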

    • MRI reveals brain ventricle expansion in pediatric patients with acute disseminated encephalomyelitis

      Jason Millward, MDC

    • Performance Portable Reconstruction of Ptychography Data with the Alpaka C++ Library

      Jiri Vyskocil, CASUS, HZDR

      Authors: Simeon Ehrig, Jiri Vyskocil, Silvio Achilles, Nico Hoffmann, Andreas Schropp, Christian Schroer, Dieter Weber, Alexander Clausen, Oleh Melnyk, Knut Müller-Caspary, Michael Bussmann

      Ptychography is a computational imaging method used to numerically retrieve the projection of an object from a set of measured diffraction patterns. Each diffraction pattern represents a partially overlapping area of the object. The corresponding inverse problem, i.e. the image reconstruction, can be solved by projection-based or gradient-based algorithms. However, existing implementations are usually optimized for a specific system and therefore difficult to port to new systems. To solve this problem, we introduce the alpaka library, which provides a generic C++ interface to implement an algorithm once and execute it on different target platforms, such as CPUs and GPUs from different vendors. First, we ported an existing algorithm, implemented in CUDA C++, to alpaka to demonstrate the workflow and its advantages. Then, we implemented another algorithm from scratch to obtain the software requirements for an easy and fast development cycle of alpaka-based image reconstruction applications.

    • Nuclear Verification from Space - Change Detection using Machine Learning and Sentinel-2 Imagery

      Lisa Beumer, FZJ

      Authors: Lisa Beumer, Irmgard Niemeyer

      Within the framework of the Nuclear Non-Proliferation Treaty, a system of safeguards was established under the authority of the International Atomic Energy Agency (IAEA) to prevent the proliferation of weapons of mass destruction. Satellite imagery is an integral part of the IAEA's monitoring and verification efforts, as it can be used to monitor changes and activities in nuclear facilities worldwide. EO big data, with its sheer volume, complexity and heterogeneity, is accompanied by a number of challenges. Image-processing algorithms based on data science methods have achieved stunning results, leading to more effective and efficient data exploitation. However, turning EO data into valuable safeguards-relevant information is an ongoing challenge due to the lack of labeled training data. In this work, we propose a method based on transfer learning to deploy a CNN that can effectively extract contextual image features, which are in turn used for change detection.
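
      A minimal sketch of the transfer-learning idea described above: a CNN backbone pre-trained on natural images is frozen and only a small head is trained to classify change/no-change patches. The architecture, data handling and labels of the actual safeguards pipeline are not reproduced here; the three-channel random input is just a placeholder for selected Sentinel-2 bands.

        import torch
        import torch.nn as nn
        from torchvision import models

        # Pre-trained backbone used as a fixed feature extractor (transfer learning).
        backbone = models.resnet18(weights="IMAGENET1K_V1")   # older torchvision: pretrained=True
        for p in backbone.parameters():
            p.requires_grad = False
        backbone.fc = nn.Linear(backbone.fc.in_features, 2)   # trainable head: change / no change

        optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
        criterion = nn.CrossEntropyLoss()

        # One toy training step on random stand-in patches; three channels are a placeholder
        # for selected Sentinel-2 bands of bi-temporal difference images.
        x = torch.randn(8, 3, 224, 224)
        y = torch.randint(0, 2, (8,))
        optimizer.zero_grad()
        loss = criterion(backbone(x), y)
        loss.backward()
        optimizer.step()
        print("toy training step, loss =", float(loss))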

    • MemBrain -- Analysis of Membranes in Cryo-Electron Tomograms

      Lorenz Lamm, Helmholtz Munich

    • Improving Explainability of Disentangled Representations using Multipath-Attribution Mappings

      Lukas Klein, Helmholtz Imaging

      Authors: Lukas Klein, João Carvalho, Mennatallah El-Assady, Joachim Buhmann, Paolo Penna, Paul Jäger

      Explainable AI aims to render model behavior understandable by humans, which can be seen as an intermediate step in extracting causal relations from correlative patterns. Due to the high risk of possible fatal decisions in image-based clinical diagnostics, it is necessary to integrate explainable AI into these safety-critical systems. Current explanatory methods typically assign attribution scores to pixel regions in the input image, indicating their importance for a model's decision. However, they fall short when explaining why a visual feature is used. We propose a framework that utilizes interpretable disentangled representations for downstream-task prediction. Through visualizing the disentangled representations, we enable experts to investigate possible causation effects. Additionally, we deploy a multi-path attribution mapping for enriching and validating explanations. We demonstrate the effectiveness of our approach on a synthetic benchmark suite and two medical datasets.

    • 3D Imaging Modalities and Algorithms at the GINIX X-ray microscope

      Markus Osterhoff, Röntgenphysik Göttingen

    • Improving convolutional neural network comprehensibility via interactive visualization of relevance maps: evaluation in Alzheimer’s disease

      Martin Dyrba, DZNE

    • Partially coherent simulations of PETRA IV Beamlines

      Martin Seyrich, Center for X-ray and Nano Science CXNS, DESY

    • Multi-dimensional imaging of solar cells with scanning X-ray microscopy

      Michael Stuckelberger, DESY

    • Automatized Diffraction Pattern Recognition for Scanning Surface X-Ray Crystallography of Polycrystalline Materials

      Nastasia Mukharamova, DESY

    • ObiWan-Microbi & MicrobeSeg: Deep-Learning Tools for Microbial Image Analysis

      Oliver Neumann, KIT

    • Automatic Recognition of Dead Sea Geomorphological Features

      Osama Alrabayah, GEOMAR

    • Object detection in dehazed Optical and Infrared Images

      Oscar Hernan Ramírez Agudelo, DLR

      Authors: Oscar Hernan Ramirez-Agudelo, Borja Carrillo-Perez, Jacob Estevam Schmiedt, Matthias Mischung, Enno Peters

      Images captured in the presence of smoke and fog often suffer from bad visibility. In such a scenario, the limited perception poses a massive problem for monitoring infrastructures. In the event of a disaster, or even an attack, smoke and fog might hinder emergency services such as the fire service. The Institutes for the Protection of Terrestrial and Maritime Infrastructures, which belong to the German Aerospace Center (DLR), are dedicated to developing concepts and technologies that help improve the safety and security of critical maritime and terrestrial infrastructures.

      In this talk, the concept of efficient image dehazing will first be presented. Second, it will be shown how state-of-the-art object detection algorithms improve their performance by using dehazed images when smoke and fog are present. Finally, this two-stage approach is applied to optical and infrared images, showing the robustness and possible applications of dehazing in this field.
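
      For context on what "image dehazing" involves, the following is a compact implementation of the classic dark-channel-prior method; it is given as a generic reference point and is not necessarily the dehazing algorithm used in this work.

        import numpy as np
        from scipy.ndimage import minimum_filter

        def dehaze_dark_channel(img, patch=15, omega=0.95, t_min=0.1):
            """Classic dark-channel-prior dehazing for an RGB image with values in [0, 1]."""
            # Dark channel: per-pixel minimum over colour channels and a local patch.
            dark = minimum_filter(img.min(axis=2), size=patch)
            # Atmospheric light: brightest colours among the haziest 0.1 % of dark-channel pixels.
            flat = dark.ravel()
            idx = np.argsort(flat)[-max(1, flat.size // 1000):]
            A = img.reshape(-1, 3)[idx].max(axis=0)
            # Transmission estimate and scene radiance recovery.
            norm = img / np.maximum(A, 1e-6)
            transmission = 1.0 - omega * minimum_filter(norm.min(axis=2), size=patch)
            t = np.clip(transmission, t_min, 1.0)[..., None]
            return np.clip((img - A) / t + A, 0.0, 1.0)

        hazy = np.random.default_rng(0).random((120, 160, 3))     # stand-in for a hazy frame
        print(dehaze_dark_channel(hazy).shape)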

    • The Hidden Image of Thawing Permafrost: project overview and first results of the radar polarimetric analysis

      Paloma Saporta, DLR

      Permafrost in the Northern Hemisphere is rapidly warming in the context of climate change. The degradation associated with this trend poses several threats, locally to landscapes, infrastructures and settlements, and globally as permafrost is a potential source of greenhouse gases in the carbon cycle. Different remote sensing methods can be used to monitor permafrost, most of them relying on surface observables which are then related to the ground thermal state. The Hidden Image of Thawing Permafrost (HIT Permafrost) project, however, aims at directly mapping subsurface properties using remote sensing data. We are aiming in particular at estimating soil properties such as ground ice content, layer composition and frozen versus non-frozen state of the soil in the sense of a vertical layering. To achieve this, the project relies on expert knowledge, ground measurements and remote sensing data combined using innovative techniques and models. The data has been collected over a particular test site, Trail Valley Creek, located in the Mackenzie River Delta (Canada). Airborne campaigns were performed simultaneously by the Alfred Wegener Institute (AWI) and the German Aerospace Center (DLR) in summer 2019 and winter 2019, providing a unique dataset of optical photographs and LiDAR, and of multimodal Synthetic Aperture Radar data, respectively. We will give an overview of the HIT Permafrost project and present some first results of the radar polarimetric analysis.

    • Helmholtz Imaging Modalities

      Philipp Heuser, Helmholtz Imaging

      Authors: Philipp Heuser, David Schwartz, Franz Rhee

      Scientists at the Helmholtz Association develop and apply a broad range of imaging modalities. The boundaries of hardware and software solutions for imaging are pushed to their limits, and the respective scientists conduct cutting-edge research across the scales. Imaging is the crucial method to study processes within atoms, molecules, organisms, ecosystems, and the universe. At Helmholtz Imaging Modalities we collect all the modalities, instruments, applications, and the respective scientific and engineering experts on one site.
      Helmholtz Imaging Modalities is the heart of the Helmholtz Imaging Network. Get to know the imaging experts from the Helmholtz Association and explore the amazing portfolio of cutting-edge instruments, from satellites to synchrotrons. Helmholtz Imaging Modalities facilitates finding the right instruments, partners and collaborators for innovative projects, with just the right complementary expertise necessary to conduct research on grand challenges.

    • AI-based Evaluation of cardiac real-time MRI with congenital heart disease

      Philipp Rosauer, DLR

      Authors: Anja Bach, Alex Hoff, Philipp Rosauer, Darius A. Gerlach, Wadim Koslow, Jens Tank

      Cardiac MRI is an important diagnostic tool in heart disease for the assessment of vital parameters. Current methods are limited to patients with regular heart beats and breath-hold capability. New real-time MRI allows the recording of 2D slices of the pumping human heart during spontaneous breathing. Such high frame rates yield large amounts of data at different breathing and cardiac phases. We introduce the key challenges in the automated evaluation of these data, especially in congenital heart diseases. These evaluations can be used to investigate the effect of respiration on parameters such as blood flow and stroke volume. The heart is acquired slice by slice, thus the cardiac cycles have to be synchronized to build a 3D heart model for in-phase value estimations. Another challenge is the anatomical segmentation. We present our workflow to tackle these challenges and show and discuss first results. One result is a breath- and cardiac-cycle-synchronized segmented univentricular heart in 4D.

    • How to classify single white blood cells in unseen data from different domains?

      Raheleh Salehi, Politecnico di Torino

    • Raw Image Space Improves Single-Cell Classification in Acute Myeloid Leukemia

      Rao Muhammad Umer, Institute of Computational Biology (ICB), Helmholtz Munich

    • The Space-Filling Curve Needle

      Sandro Elsweijer, German Aerospace Center (DLR)

      Adaptive Mesh Refinement

    • Simultaneous Mapping of Magnetic and Atomic Structure of Ferromagnets using Ltz-4D-STEM

      Sangjun Kang, KIT

      Authors: Sangjun Kang, Xiaoke Mu, Di Wang, Arnaud Caron, Christian Minnert, Karsten Durst, Christian Kübel

      Ferromagnetic materials consist of a domain structure where the magnetic fields of dipoles are grouped together and aligned to minimize magnetostatic energy. The formation of the magnetic domains is associated with magnetic anisotropies which tune the local configuration of spins. According to magneto-elastic coupling, magnetic anisotropies are coupled to local atomic displacement [1]. Therefore, a strain field within a material induces a rearrangement of magnetic domains, giving rise to the complicated magnetic responses of magnets [2]. For soft ferromagnetic metallic glasses (SFMG), which originally possess an isotropic atomic structure, the magnetic domain structure is dominated by the development of structural anisotropies [1]. The magnetic domain structure of SFMGs is thus extremely sensitive to the local deviatoric distortion at the atomic scale [3]. Highly sensitive measurement of strain and magnetic fields in SFMGs with nanometer resolution is highly desired to understand the correlation of these multiple fields and to guide new material designs. Here, we developed Lorentz 4-dimensional scanning transmission electron microscopy (Ltz-4D-STEM) for correlative mapping of the magnetic structure, elastic energy, strain field, and density of a soft magnetic metallic glass. A quasi-parallel electron probe is focused to ~10 nm diameter on the soft magnetic TEM sample under field-free conditions, as illustrated in Figure 1A. Electron diffraction patterns are acquired from the nano-volume at each scan position during stepwise scanning of the probe over the area of interest. Diffraction patterns are shown in the background. As shown in Figure 1B, measuring the center position of each local diffraction pattern provides a magnetic domain map, and the strength and orientation of the elliptical deviation of each local diffraction pattern provide an elastic energy map and strain fields. Quantifying the area encircled by the 1st ring of each diffraction pattern provides an atomic packing density map [4]. Thus, this method simultaneously provides a correlative visualization of multi-field and atomic structure information at the pixel level.
      Figure 1C shows typical results from the Lorentz 4D-STEM. The magnetic field B, first principal strain (ε), and relative atomic packing density (∆ρ) are simultaneously measured from a deformed soft magnetic Fe85.2Si0.5B9.5P4Cu0.8 MG ribbon, providing pixel-level correlation.

      Keywords: Ltz-4D-STEM, magnetic domain structure, shear band, strain field, soft magnetic amorphous alloy

      References:
      [1] Shen et al. Nat. Commun. 9, 4414 (2018).
      [2] Lei, N. et al. Nat. Commun. 4, 1378 (2013).
      [3] Pascarelli, S. et al. Phys. Rev. Lett. 99, 237204 (2007).
      [4] Kang et al., submitted (2022).

      Acknowledgment: Financial support from Deutsche Forschungsgemeinschaft (DFG) for grant MU4276/1-1.

      Figure 1. Schematic illustration of Lorentz 4D-STEM. (A) The electron probe is focused on the soft magnetic TEM sample under field-free conditions. Spatially resolved diffraction patterns are collected during scanning over a shear band in a deformed metallic glass. (B) Data processing: the center of mass (CoM) of the direct beam measures the momentum transfer in the diffraction pattern caused by the magnetic field (Lorentz force) inside the sample at each probe position. The principal strains (P1 and P2) are calculated from the elliptic distortion of the diffraction ring from a perfect circle. The local atomic packing density is quantified by the area encircled by the 1st ring of each diffraction pattern. (C) Obtained data: magnetic field B, first principal strain (P1), and relative atomic packing density (∆ρ) from a deformed soft magnetic Fe85.2Si0.5B9.5P4Cu0.8 MG ribbon.
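
      The data processing described in (B) above boils down, for the magnetic signal, to measuring the centre of mass of the direct beam in every diffraction pattern; a minimal sketch on a synthetic 4D-STEM stack is given below (the strain and density analysis is omitted).

        import numpy as np

        rng = np.random.default_rng(0)
        scan_y, scan_x, det = 16, 16, 64                 # scan grid and detector size
        data = rng.poisson(1.0, size=(scan_y, scan_x, det, det)).astype(float)  # toy 4D-STEM stack

        # Centre of mass of each diffraction pattern, relative to the detector centre.
        ky, kx = np.mgrid[0:det, 0:det]
        total = data.sum(axis=(-2, -1))
        com_y = (data * ky).sum(axis=(-2, -1)) / total - (det - 1) / 2.0
        com_x = (data * kx).sum(axis=(-2, -1)) / total - (det - 1) / 2.0

        # The beam deflection (Lorentz force) is proportional to the in-plane magnetic induction,
        # so the two CoM maps serve as a map of the projected magnetic field over the scan area.
        print("CoM shift maps:", com_y.shape, com_x.shape)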

    • HistAuGAN: Style Transfer as Stain Augmentation Technique in Histopathology

      Sophia Wagner, Helmholtz AI

    • Neuroimaging-Genetics bridge for better understanding of human brain organization

      Talip Yasir Demirtas, INM-7, FZJ

      Authors: Talip Yasir Demirtas, Leonard Sasse, Ecehan Abdik, Amir Omidvarnia, Tunahan Cakir, Federico Raimondo, Kaustubh R. Patil

      Magnetic Resonance Imaging (MRI) has promising potential for clinical translation due to its non-invasive nature and high spatial resolution. MRI offers structural and functional measurement modalities which capture the fundamental principles of human brain organization and variability in neurological disorders. Combining information from the MRI-derived phenomics and DNA-derived transcriptomics can provide a holistic view of human brain organization and deeper insights into disease pathologies.

      Here, we aimed to develop a Python package called nimgen to bridge MRI organization and gene expression data. The package allows extracting gene expression levels for parcellations commonly used in the neuroimaging community, leveraging the 62,000 probes from the Allen Human Brain Atlas (AHBA) [1] via the abagen package [2]. Each gene's expression levels can then be correlated with an MRI marker of interest. False discovery rate control is applied, resulting in a statistically significantly correlated subset of genes. With the assistance of gene enrichment analysis tools (e.g. WebGestalt [3]), the significant genes are examined to obtain higher-order biological processes. However, since increasing the number of parcels can inflate statistical significance, nimgen provides a permutation-testing approach based on BrainSMASH [4]. An efficient, reproducible, and rapid pipeline with several MRI markers may be constructed due to the nimgen package's modular design and functionality. In addition, this user-friendly package allows researchers to rapidly analyze NIfTI files with only a basic computational background and minimal effort. We hope that marker-specific candidate gene sets found by the nimgen package will help to better understand the genetic makeup of the human brain and develop strategies for the diagnosis and treatment of diseases. Our code is publicly available at https://github.com/juaml/nimgen. With the flexibility this software provides, we aim to create a comprehensive mapping of biological processes based on transcriptome and MRI measurements, leveraging state-of-the-art neuroimaging data repositories such as the Human Connectome Project (HCP) [5] S1200 release.

      Keywords: Gene Expression, Bioinformatics, Neuroimaging

      References:
      [1] E. H. Shen, C. C. Overly, and A. R. Jones, “The Allen Human Brain Atlas: comprehensive gene expression mapping of the human brain,” Trends Neurosci., vol. 35, no. 12, pp. 711–714, 2012.
      [2] R. D. Markello, A. Arnatkeviciute, J.-B. Poline, B. D. Fulcher, A. Fornito, and B. Misic, “Standardizing workflows in imaging transcriptomics with the abagen toolbox,” Elife, vol. 10, p. e72129, 2021.
      [3] Y. Liao, J. Wang, E. J. Jaehnig, Z. Shi, and B. Zhang, “WebGestalt 2019: gene set analysis toolkit with revamped UIs and APIs,” Nucleic Acids Res., vol. 47, no. W1, pp. W199–W205, 2019.
      [4] J. B. Burt, M. Helmer, M. Shinn, A. Anticevic, and J. D. Murray, “Generative modeling of brain maps with spatial autocorrelation,” NeuroImage, vol. 220, p. 117038, Oct. 2020, doi: 10.1016/j.neuroimage.2020.117038.
      [5] D. C. Van Essen et al., “The Human Connectome Project: a data acquisition perspective,” Neuroimage, vol. 62, no. 4, pp. 2222–2231, 2012.
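
      A minimal sketch of the statistical core described above: correlating an MRI marker with parcel-wise gene expression and applying false-discovery-rate control. It uses synthetic arrays and SciPy/statsmodels directly rather than the nimgen API, and it omits the BrainSMASH-based permutation testing.

        import numpy as np
        from scipy import stats
        from statsmodels.stats.multitest import multipletests

        rng = np.random.default_rng(0)
        n_parcels, n_genes = 100, 2000

        expression = rng.normal(size=(n_parcels, n_genes))   # stand-in for parcel-wise AHBA expression
        marker = expression[:, 0] * 0.8 + rng.normal(size=n_parcels)   # synthetic MRI marker

        # Correlate the marker with every gene's expression profile across parcels.
        r = np.empty(n_genes)
        p = np.empty(n_genes)
        for g in range(n_genes):
            r[g], p[g] = stats.spearmanr(marker, expression[:, g])

        # False-discovery-rate control (Benjamini-Hochberg) over all genes.
        significant, p_corrected, _, _ = multipletests(p, alpha=0.05, method="fdr_bh")
        print("significant genes:", int(significant.sum()))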

    • DELAD: Deep Landweber-guided deconvolution with Hessian and sparse prior

      Tomas Chobola, Helmholtz Munich

      Authors: Tomas Chobola, Tingying Peng, Jan Taucher, Anton Theileis

      We present a lightweight model for non-blind image deconvolution that incorporates the classic iterative method into a deep learning application. Instead of using large over-parameterized generative networks to create sharp picture representations, we build our network based on the iterative Landweber deconvolution algorithm, which is integrated with trainable convolutional layers to enhance the recovered image structures and details. In addition to the data fidelity term, we add Hessian and sparsity constraints as regularization terms to improve the image reconstruction. Our proposed model is self-supervised and converges to a solution based purely on the input blurred image and the respective blur kernel, without the requirement of any pre-training. We evaluate our technique using standard computer vision benchmarking datasets as well as real microscope images obtained by our unique underwater optical equipment, demonstrating the capabilities of our model in a real-world application.
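
      For reference, the classic (non-learned) Landweber iteration that the model builds on looks as follows; this sketch is a plain NumPy/SciPy implementation of the baseline, not the DELAD network with its trainable layers and Hessian/sparsity terms.

        import numpy as np
        from scipy.signal import fftconvolve

        def landweber_deconvolve(blurred, kernel, n_iter=300, step=1.0):
            """Classic Landweber iteration: x <- x + step * K^T (y - K x), with non-negativity."""
            kernel_t = kernel[::-1, ::-1]              # adjoint of convolution = correlation
            x = np.zeros_like(blurred)
            for _ in range(n_iter):
                residual = blurred - fftconvolve(x, kernel, mode="same")
                x = np.clip(x + step * fftconvolve(residual, kernel_t, mode="same"), 0.0, None)
            return x

        rng = np.random.default_rng(0)
        sharp = rng.random((64, 64))
        kernel = np.ones((5, 5)) / 25.0                # toy box-blur kernel
        blurred = fftconvolve(sharp, kernel, mode="same") + 0.001 * rng.normal(size=sharp.shape)
        restored = landweber_deconvolve(blurred, kernel)
        print("residual norm:",
              np.linalg.norm(fftconvolve(restored, kernel, mode="same") - blurred))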

    • BaSiCPy: a napari plugin for microscopy illumination correction

      Tingying Peng, Helmholtz Munich

      Authors: Timothy Morello, Tingying Peng, Nick Schaub, Yohsuke Fukai, Yu Liu

      Due to inherent imperfections in the optical path, microscopy images, particularly fluorescence microscopy images, are often distorted by uneven illumination and hence have spurious intensity variations, also known as shading or vignetting effects. We introduce BaSiC, an image correction method based on low-rank and sparse decomposition, which can be used to correct both spatially uneven illumination and temporal background bleaching. Currently, we are funded by the Chan Zuckerberg Initiative (CZI) to develop BaSiCPy, a Python implementation of BaSiC as a napari plugin. Compared to the original BaSiC, which could only run on CPUs, the current BaSiCPy builds upon the JAX library and can also run on GPUs and TPUs, giving it a speed boost and making it better suited for large-scale image processing. We anticipate that presenting BaSiCPy at the Helmholtz Imaging conference will encourage more life scientists to use the napari BaSiCPy plugin and help them improve quantification of their microscopy images.
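
      Once the flat-field and dark-field profiles have been estimated, applying the shading correction is a one-liner per image; the sketch below shows that application step with synthetic profiles (the low-rank plus sparse estimation itself, which is the core of BaSiC/BaSiCPy, is not reproduced here).

        import numpy as np

        rng = np.random.default_rng(0)
        h, w = 256, 256

        # Synthetic stand-ins for the profiles that BaSiC-style methods estimate from an image stack.
        yy, xx = np.mgrid[0:h, 0:w]
        flatfield = 0.6 + 0.4 * np.exp(-((yy - h / 2) ** 2 + (xx - w / 2) ** 2) / (2 * (w / 3) ** 2))
        darkfield = 5.0 * np.ones((h, w))                    # constant camera offset

        raw = flatfield * (1000.0 * rng.random((h, w))) + darkfield   # vignetted raw image

        # Shading correction: subtract the dark-field, divide by the (unit-mean) flat-field.
        corrected = (raw - darkfield) / (flatfield / flatfield.mean())
        print("corrected image range:", corrected.min(), corrected.max())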

    • Material-specific table-top EUV ptychography

      Wilhelm Eschen, Helmholtz Institut Jena

    • DeStripe: A Self2Self Spatio-Spectral Graph Neural Network with Unfolded Hessian for Stripe Artifact Removal in Light-sheet Microscopy

      Yu Liu, Technical University of Munich