This is our first on-site networking event. We look forward to meeting the community, exchanging ideas about current trends and challenges, sharing experiences, and learning about new results in imaging science.
We invite all imaging enthusiasts and scientists from all Helmholtz research fields who are conducting research on imaging or are applying imaging techniques as users to come to Berlin for the Helmholtz Imaging Conference 2022 on May 31-June 1, 2022. This two-day event emphasizes networking and is a face-to-face meeting. The conference program includes selected talks along the entire imaging pipeline and provides a forum for innovative, cross-domain research projects.
Our keynote speakers are:
"Space-Based Synthetic Aperture Radar (SAR) Imaging for Climate Research and Environmental Monitoring"
"Structural Biology of SARS-CoV-2: Making the Invisible Enemy Visible"
"X-roads of Imaging and Data Science: Projects to Accelerate Imaging"
The timetable is tentative.
The call for abstracts is open. Please submit an abstract to give a talk or to present a poster. Submit now!
For more information about Helmholtz Imaging visit our website.
Space-based Synthetic Aperture Radar (SAR) has been widely used for Earth remote sensing for more than 40 years. SAR is unique in its imaging capability: it provides high-resolution imaging independent of daylight, cloud cover and weather conditions for a multitude of applications, ranging from geoscience and climate change research, environmental and Earth system monitoring, 2D and 3D mapping, change detection, 4D mapping (space and time) and security-related applications up to planetary exploration. It is therefore predestined to monitor dynamic processes on the Earth's surface in a reliable, continuous and global way. SAR systems have a side-looking imaging geometry and are based on a pulsed radar installed on a moving platform. By means of coherent processing of the received echo signal, a long synthetic aperture can be formed, corresponding to a synthetic antenna length of a few hundred meters in the airborne case and even several kilometers in the spaceborne case. Since the length of the synthetic aperture increases with the platform height, the spatial resolution becomes independent of the distance to the target, making SAR a unique technique for space-based Earth observation.
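The range independence of the azimuth resolution follows from a standard strip-map SAR argument (a textbook result, not specific to this talk): the synthetic aperture grows linearly with range, which cancels the range dependence of the resolution.

```latex
% Strip-map SAR azimuth resolution (textbook result): the synthetic
% aperture length L_sa grows with range R, cancelling R.
\[
  L_{\mathrm{sa}} \approx \frac{\lambda R}{D}, \qquad
  \theta_{\mathrm{sa}} \approx \frac{\lambda}{2 L_{\mathrm{sa}}}, \qquad
  \delta_{\mathrm{az}} = R\,\theta_{\mathrm{sa}}
    = \frac{\lambda R}{2 L_{\mathrm{sa}}} = \frac{D}{2}
\]
% D: physical antenna length, lambda: radar wavelength.
% The azimuth resolution D/2 is independent of the range R.
```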
This talk first describes the principles of SAR imaging along with various application examples. Next, advanced SAR imaging techniques such as interferometry, polarimetry, tomography and holography are presented. In combination with the latest digital beamforming imaging techniques, a highly innovative spaceborne SAR mission, called Tandem-L, has been proposed for the global observation of dynamic processes on the Earth's surface with hitherto unprecedented quality and resolution. The talk concludes with an outlook on the future of space-based SAR imaging.
In this talk we provide an overview of two research areas where computational imaging is likely to have an impact. We first focus on the heritage sector, which is experiencing a digital revolution driven in part by the increasing use of non-invasive, non-destructive imaging techniques. These new imaging methods provide a way to capture information about an entire painting and can give us information about features at or below its surface. We focus on macro X-ray fluorescence (XRF) scanning, a technique for mapping the chemical elements in paintings. After describing in broad terms how this device works, we introduce a method that can process XRF scanning data from paintings. The results presented show the ability of our method to detect and separate weak signals related to hidden chemical elements in paintings. We then discuss results on Leonardo's "The Virgin of the Rocks" and show that our algorithm is able to reveal, more clearly than ever before, the hidden drawings of a previous composition that Leonardo then abandoned for the painting we can now see.

In the second part of the talk, we focus on two-photon microscopy and neuroscience. Multi-photon microscopy is unparalleled in its ability to image cellular activity and neural circuits, deep in living tissue, at single-cell resolution. We introduce light-field two-photon microscopy and present a method to localize neurons in 3-D. The method is based on the use of proper sparsity priors, novel optimization strategies and machine learning.
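The 3-D localization described above relies on sparsity priors; a minimal, generic sketch of that idea (our own illustration, not the authors' pipeline) is sparse recovery via ISTA, where a hypothetical matrix A stands in for the light-field forward model and x for the sparse neuron activity:

```python
import numpy as np

def ista(A, y, lam=1.0, n_iter=1000):
    """Iterative shrinkage-thresholding: min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L          # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# Toy example: recover three point sources from 50 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))
x_true = np.zeros(200)
x_true[[10, 80, 150]] = [1.0, -0.5, 2.0]
y = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.flatnonzero(np.abs(ista(A, y)) > 0.1))  # recovered source indices
```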
Moderator: PD Dr. Wolfgang zu Castell
Guests:
Prof. Otmar D. Wiestler (Helmholtz Association),
Dr. Holger Becker (Member of the German Bundestag),
Dr. Dagmar Kainmüller (MDC),
Dr. Andrea Thorn (University of Hamburg)
During the COVID-19 pandemic, structural biologists rushed to solve the structures of the 28 proteins encoded by the SARS-CoV-2 genome in order to understand the viral life cycle and to enable structure-based drug design. In addition to the 204 previously solved structures from SARS-CoV-1, over 2000 structures covering 18 of the SARS-CoV-2 viral proteins were released in a span of a few months. These structural models serve as the basis for research to understand how the virus hijacks human cells, for structure-based drug design, and to aid in the development of vaccines. The Coronavirus Structural Task Force [1] rapidly categorized, evaluated and reviewed all of these experimental protein structures in order to help original authors and downstream users, for example Folding@home, OpenPandemics and the EU JEDI COVID-19 challenge. We also created reviews, illustrations, animations and 3D-printable models of the virus from these experimental results, which we distributed via www.insidecorona.net. In the beginning, there were no tenured academics in the Coronavirus Structural Task Force; we were an ad hoc collaboration of 26 researchers across nine time zones, brought together by the desire to fight the pandemic. Still, we were able to rapidly establish a large network of COVID-19-related research, forge friendships and collaborations across national boundaries, spread knowledge about the structural biology of the virus and provide improved models for in silico drug discovery projects. Now, after more than two years, we have consolidated our collective knowledge about the virus and can leverage this insight for the question: What is next?
[1] Croll, T., Diederichs, K., Fischer, F., Fyfe, C., Gao, Y., Horrell, S., Joseph, A., Kandler, L., Kippes, O., Kirsten, F., Müller, K., Nolte, K., Payne, A., Reeves, M. G., Richardson, J., Santoni, G., Stäb, S., Tronrud, D., Williams, C., Thorn, A. (2021) Making the invisible enemy visible. Nature Structural & Molecular Biology 28, 404–408. https://doi.org/10.1038/s41594-021-00593-7
UTILE project: Autonomous Image Analysis to Accelerate the Discovery and Integration of Energy Materials; Andre Colliard Granero; IEK-13, FZJ
PtyNAMi - Ptychographic Nano-Analytical Microscope; Andreas Schropp; Center for X-ray and Nano Science CXNS, DESY
Towards distributed imaging for autonomous seismic surveys in multi-agent networks; Ban-Sok Shin; DLR
In situ nanotomographic X-ray imaging of magnesium implant degradation; Berit Zeller-Plumhoff; Hereon
Correlative Microscopy of the Rhizosphere - Progress and Challenges; Chaturanga D. Bandara; UFZ
Initial results using neuroimaging features to predict the genetic risk of RLS; Federico Raimondo; INM-7, FZJ
Computational and machine learning approaches for the genetic analysis of histological images; Francesco Paolo Casale; Helmholtz Munich
Federated Learning for Data and Model Privacy in Cloud; Hussain Ahmad Madni; University of Udine
Mapping Arctic treeline vegetation using LiDAR data in the Mackenzie Delta area, Canada; Inge Grünberg; AWI
Multi-axes fusing for uncertainty estimation and improved segmentation of biodegradable bone implants in SRµCTs; Ivo Matteo Baltruschat; Research and Innovation in Scientific Computing, DESY
MRI reveals brain ventricle expansion in pediatric patients with acute disseminated encephalomyelitis; Jason Millward; MDC
Performance Portable Reconstruction of Ptychography Data with the Alpaka C++ Library; Jiri Vyskocil; CASUS, HZDR
Nuclear Verification from Space - Change Detection using Machine Learning and Sentinel-2 Imagery; Lisa Beumer; FZJ
MemBrain – Analysis of Membranes in Cryo-Electron Tomograms; Lorenz Lamm; Helmholtz Munich
Improving Explainability of Disentangled Representations using Multipath-Attribution Mappings; Lukas Klein; Helmholtz Imaging
3D Imaging Modalities and Algorithms at the GINIX X-ray microscope; Markus Osterhoff; Röntgenphysik Göttingen
Improving convolutional neural network comprehensibility via interactive visualization of relevance maps: evaluation in Alzheimer’s disease; Martin Dyrba; DZNE
Partially coherent simulations of PETRA IV Beamlines; Martin Seyrich; Center for X-ray and Nano Science CXNS, DESY
Multi-dimensional imaging of solar cells with scanning X-ray microscopy; Michael Stuckelberger; DESY
Automatized Diffraction Pattern Recognition for Scanning Surface X-Ray Crystallography of Polycrystalline Materials; Nastasia Mukharamova; DESY
ObiWan-Microbi & MicrobeSeg: Deep-Learning Tools for Microbial Image Analysis; Oliver Neumann; KIT
Automatic Recognition of Dead Sea Geomorphological Features; Osama Alrabayah; GEOMAR
Object detection in dehazed Optical and Infrared Images; Oscar Hernan Ramírez Agudelo; DLR
The Hidden Image of Thawing Permafrost: project overview and first results of the radar polarimetric analysis; Paloma Saporta; DLR
Helmholtz Imaging Modalities; Philipp Heuser; Helmholtz Imaging
AI-based Evaluation of cardiac real-time MRI with congenital heart disease; Philipp Rosauer; DLR
How to classify single white blood cells in unseen data from different domains?; Raheleh Salehi; Politecnico di Torino
Raw Image space improves Single-Cell Classification in Acute Myeloid Leukemia; Rao Muhammad Umer; Institute of Computational Biology (ICB), Helmholtz Munich
The Space-Filling Curve Needle; Sandro Elsweijer; German Aerospace Center (DLR)
Simultaneous Mapping of Magnetic and Atomic Structure of Ferromagnets using Ltz-4D-STEM; Sangjun Kang; KIT
HistAuGAN: Style Transfer as Stain Augmentation Technique in Histopathology; Sophia Wagner; Helmholtz AI
Neuroimaging-Genetics bridge for better understanding of human brain organization; Talip Yasir Demirtas; INM-7, FZJ
DELAD: Deep Landweber-guided deconvolution with Hessian and sparse prior; Tomas Chobola; Helmholtz Munich
BaSiCPy: a napari plugin for microscopy illumination correction; Tingying Peng; Helmholtz Munich
Material-specific table-top EUV ptychography; Wilhelm Eschen; Helmholtz Institut Jena
DeStripe: A Self2Self Spatio-Spectral Graph Neural Network with Unfolded Hessian for Stripe Artifact Removal in Light-sheet Microscopy; Yu Liu; Technical University of Munich
Jury Awards and Public Choice Awards
Medical research is increasingly dependent on the ability to capture, store, process, integrate, and analyse large volumes of data. Moreover, in the foreseeable future, the size of research studies will continue to increase, and the overarching driver is to improve research quality and to increase reproducibility. Large-scale studies require a step change in the way that data is treated.
The opportunities are accelerating: with the widespread introduction of ML methods such as convolutional neural networks, the information that can be quantitatively gathered from images will increase, and so will the value of imaging within the research lifecycle. ML-based image analysis can extract more information, with less human intervention, across larger study sizes.
This presentation will provide an overview of a number of Australian research infrastructure initiatives that attempt to bridge the gap between the instrument and computing capability by providing imaging users with accessible, powerful and easy-to-use environments. I will present initiatives to integrate instruments, manage imaging research data at a national scale, provide crucial data analysis skills and accelerate the adoption of standards and machine learning techniques across areas such as structural biology and neuroimaging, and I will provide an overview of the future data strategy being developed across the National Imaging Facility.
Magnetic Resonance Imaging (MRI) is a useful method for detecting focal macroscopic tissue abnormalities in the brains of patients with neurodegenerative disorders. A variety of imaging techniques are commonly used to estimate the brain myelin content. Myelin water fraction (MWF) mapping using MRI has enabled researchers to directly examine myelination and demyelination in both developing and diseased brains. T1-, T2- and T2*-weighted multi-echo data have been proposed for estimating MWF in the human brain. However, even for a relatively simple two-pool signal model, consisting of myelin water and non-myelin-associated water, the parameter space for obtaining MWF estimates remains high-dimensional, which makes the parameter estimation challenging. The aim of this research is to improve the accuracy and precision of brain myelin content mapping. Utilizing geophysical joint inversion concepts, we propose a novel joint inversion imaging approach in which data from multiple contrasts are combined in a single optimization process. Compared to state-of-the-art methods for MWF estimation, the proposed method is expected to be less biased and less susceptible to noise.
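For orientation, a minimal sketch of the two-pool idea (a generic voxel-wise bi-exponential fit to multi-echo T2 data; the echo times and pool parameters below are illustrative assumptions, and the joint inversion proposed here goes beyond this single-contrast baseline):

```python
import numpy as np
from scipy.optimize import curve_fit

def two_pool(te, a_my, a_fr, t2_my, t2_fr):
    """Bi-exponential multi-echo signal: myelin water + free water pools."""
    return a_my * np.exp(-te / t2_my) + a_fr * np.exp(-te / t2_fr)

te = np.arange(10, 330, 10.0)  # echo times in ms (assumed acquisition)
# Simulated voxel: 15% myelin water (T2 ~ 20 ms), 85% free water (T2 ~ 80 ms)
signal = two_pool(te, 0.15, 0.85, 20.0, 80.0)
signal += 0.005 * np.random.default_rng(0).standard_normal(te.size)

p0 = (0.2, 0.8, 15.0, 70.0)    # initial guess for the four parameters
popt, _ = curve_fit(two_pool, te, signal, p0=p0, bounds=(0, np.inf))
a_my, a_fr = popt[:2]
print("MWF =", a_my / (a_my + a_fr))  # myelin water fraction estimate
```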
In this presentation we summarize the achievements made within the first year of the Multisat4slows project (Developing a multi-scale satellite-based imaging platform for change detection: application for landslide hazard classification and early warning service), financed within the Helmholtz Imaging 2020 call. In particular, we introduce a new reference database compiled for training, testing and validating machine learning algorithms, discuss example applications showing how various parameters extracted from SAR imagery improve change detection related to landslides, elaborate on our experience in using data-driven approaches for event timing detection, and finally introduce the first version of a publicly available data analytics toolbox for rapid analysis of landslides, in which the developed models are deployed.
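A common building block for SAR-based change detection (a generic technique sketched under our own assumptions, not necessarily the project's exact pipeline) is the log-ratio of two co-registered amplitude images, thresholded to flag changed pixels:

```python
import numpy as np

def log_ratio_change(amp_before, amp_after, k=3.0, eps=1e-6):
    """Flag changed pixels via the log-ratio of two co-registered SAR
    amplitude images, thresholded at k standard deviations."""
    lr = np.log((amp_after + eps) / (amp_before + eps))
    mask = np.abs(lr - lr.mean()) > k * lr.std()
    return mask, lr

# Toy example with synthetic speckle-like amplitudes:
rng = np.random.default_rng(1)
before = rng.rayleigh(1.0, size=(64, 64))
after = before.copy()
after[20:30, 20:30] *= 4.0   # simulated landslide-induced backscatter change
mask, lr = log_ratio_change(before, after)
print("changed pixels:", int(mask.sum()))
```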
Deep learning (DL) techniques are now ubiquitous in the field of computer vision and its various applications (classification, detection and segmentation). Recent developments in 3D sensing technology and low-cost devices facilitate the process of 3D data collection. Light Detection and Ranging (LiDAR) and Structure from Motion (SfM) can rapidly generate centimeter- to sub-centimeter-resolution 3D point clouds, which are rapidly becoming essential to real-time applications such as autonomous driving and robotics. In the Hyper 3D-AI project, we focus on solving classification and segmentation tasks on multi-sensor 3D data using deep learning, i.e. the proposed methods exploit the different information acquired by the sensors to achieve a segmentation that is sensitive to geometric, textural and spectral attributes. We propose different deep models that can handle 3D data represented as point clouds and train them for classification and segmentation tasks.
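A minimal sketch of the kind of point-cloud network involved (a PointNet-style classifier under our own simplifying assumptions, not the project's actual models): a shared per-point MLP followed by a permutation-invariant max-pool handles unordered points with mixed geometric and spectral attributes.

```python
import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    """PointNet-style classifier: shared per-point MLP, then a
    permutation-invariant max-pool over points, then a class head."""
    def __init__(self, in_dim=6, n_classes=8):   # e.g. xyz + RGB per point
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, 256), nn.ReLU(),
        )
        self.head = nn.Linear(256, n_classes)

    def forward(self, pts):                      # pts: (batch, n_points, in_dim)
        feat = self.point_mlp(pts)               # per-point features
        pooled = feat.max(dim=1).values          # order-invariant global feature
        return self.head(pooled)                 # class logits

# Toy forward pass: 2 clouds of 1024 points with xyz + RGB attributes.
model = TinyPointNet()
logits = model(torch.randn(2, 1024, 6))
print(logits.shape)                              # torch.Size([2, 8])
```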
The Earth's climate is changing, and science should provide better information about future climate change impacts at the global and regional levels. Current developments in high-performance computing and climate modelling allow us to simulate the coupled climate system with an unprecedented level of detail, using spatial resolutions of several kilometres in the atmosphere and ocean. These simulations, called Digital Twins of Earth (DTE), produce very large amounts of data that are hard to visualise and explore. We will discuss opportunities and challenges that come with the emergence of DTE, including the need for new visualisation and analysis tools, image-based data archiving and distribution, as well as the use of these data for educational and science communication purposes.
Time-lapse microscopy in combination with precisely controllable microfluidic lab-on-a-chip systems allows observing the emergence of microbial populations starting from a single cell. Live-cell imaging is thus a powerful tool for studying heterogeneity of cell growth, morphological development, or cell-to-cell interaction. Such insights are door openers for predicting strain performance and engineering synthetic microbial cultures for industrial application. The key barrier to extracting spatiotemporal information from live-cell imaging experiments, each comprising hundreds of time-lapse image stacks, is accurate and automated cell segmentation.
New deep-learning methods offer high quality and throughput for image processing of microbial time-lapses, scaling to hundreds of thousands of cells. To achieve top accuracy beyond specific imaging modalities and microbial morphologies, however, segmentation models require an enormous amount of ground-truth data for training. In addition, available approaches are often difficult to handle due to heavy software and hardware dependencies. Within the SATOMI project, we developed two open-access tools, microbeSEG and ObiWan-Microbi, that cover the full workflow of deep-learning methods in microfluidic live-cell analysis: microbeSEG facilitates the creation and management of ground-truth datasets and the training of deep-learning segmentation models, while ObiWan-Microbi provides a suite of semi-automated annotation tools and unlocks deep-learning execution in the cloud.
In our talk, we start by outlining the challenges arising in the analysis of large amounts of microfluidic image data from the "user" and "developer" perspectives. We then present our deep-learning workflow for creating custom cell segmentations, which we showcase in action on a challenging example of time-lapse imagery of a microbial consortium, yielding results that were previously impossible to extract.
Although developed in the field of microbial live-cell imaging, the workflow is generally applicable whenever segmentation and image processing are employed.
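For orientation, a classical baseline for the cell segmentation step (a generic thresholding-plus-watershed sketch using scikit-image; the tools described above replace this with trained deep-learning models):

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, measure, morphology, segmentation

def segment_cells(frame):
    """Classical baseline: Otsu threshold, then watershed on the
    distance transform to split touching cells."""
    mask = frame > filters.threshold_otsu(frame)      # foreground mask
    mask = morphology.remove_small_objects(mask, 30)  # drop speckle
    distance = ndi.distance_transform_edt(mask)
    markers = measure.label(morphology.h_maxima(distance, 2))
    return segmentation.watershed(-distance, markers, mask=mask)

# Toy usage on a synthetic frame with two bright blobs:
frame = np.zeros((64, 64))
frame[10:20, 10:20] = 1.0
frame[40:55, 40:55] = 1.0
labels = segment_cells(frame)
print("cells found:", labels.max())
```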
Through its magnetic activity, the Sun governs the conditions in Earth's vicinity, creating space weather events which have drastic effects on our space- and ground-based technology. The identification of coronal holes (CHs) on the Sun as one of the source regions of the solar wind is therefore crucial for achieving predictive capabilities. In this study, we used an unsupervised machine learning method, k-means, to cluster pixel-wise the passband images of the Sun at the wavelengths 171 Å, 193 Å and 211 Å. Our results show that pixel-wise k-means clustering, together with systematic pre- and post-processing steps, provides results compatible with those from complex methods such as CNNs. More importantly, our study shows that there is a need for a CH database for which a consensus about the CH boundaries has been reached by independent observers. This database could then be used as the "ground truth" when using a supervised method, or simply to evaluate the goodness of the models.
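A minimal sketch of the pixel-wise clustering step (synthetic arrays stand in for the 171/193/211 Å channels; the actual study wraps this in systematic pre- and post-processing):

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-ins for the three EUV passband images.
rng = np.random.default_rng(0)
img_171, img_193, img_211 = (rng.random((128, 128)) for _ in range(3))

# Stack channels so each pixel becomes a 3-dimensional feature vector.
features = np.stack([img_171, img_193, img_211], axis=-1).reshape(-1, 3)

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(features)
labels = kmeans.labels_.reshape(128, 128)  # per-pixel cluster map
print(np.bincount(labels.ravel()))         # cluster sizes; on real data one
                                           # cluster captures the dark CH pixels
```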
Participants' Choice
Meaningful performance assessment of image analysis algorithms depends on objective and transparent performance metrics. Despite major shortcomings in the current state of the art, so far only limited attention has been paid to practical pitfalls associated with the use of particular metrics for an image analysis task. Therefore, a number of international initiatives have collaborated to provide researchers with guidance and tools for selecting performance metrics in a problem-aware manner. Based on input from experts from more than 60 institutions worldwide, we believe our metric recommendation framework to be useful to the community and to enhance the quality of image analysis algorithm validation.
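One concrete example of such a pitfall (a generic illustration, not drawn from the framework itself): for very small structures, the Dice score collapses a near-miss to zero, scoring it like a total failure:

```python
import numpy as np

def dice(gt, pred, eps=1e-8):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(gt, pred).sum()
    return 2 * inter / (gt.sum() + pred.sum() + eps)

gt = np.zeros((100, 100), bool)
gt[50, 50] = True                          # single-pixel target structure
pred_miss = np.zeros_like(gt)              # predicts nothing
pred_off = np.zeros_like(gt)
pred_off[50, 51] = True                    # off by a single pixel

print(dice(gt, pred_miss))  # 0.0 -- one pixel decides the whole score
print(dice(gt, pred_off))   # 0.0 -- near-miss indistinguishable from failure
```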
Ptychography is a high-resolution imaging technique known for being capable of reaching sub-10 nm resolution. A coherent beam is required to reach this resolution. Currently, the coherent fraction of the beam at third-generation synchrotron sources is only 10%. Thus, ptychography needs to isolate the coherent flux, and consequently more than 90% of the incoming X-rays are wasted. This reduces the total flux on the sample, which leads to an increased scan time or a reduced spatial resolution due to a lower signal-to-noise ratio. The need to choose between high resolution and a large field of view is often a showstopper for many experiments. This problem is especially dire for biological samples, where a single specimen often is not representative of a population. The latest advances in ptychographic algorithms provide the possibility of introducing mutually incoherent modes. This was adapted in recent developments for visible-light microscopy with multiple independent beams. The sample is scanned simultaneously not by one beam but by many, which can be mutually incoherent. Each mode in the algorithm is used to reconstruct one beam and a corresponding region of the sample, expanding the scan area by the number of beams. We have successfully implemented this technique in the X-ray regime using up to 6 beams in parallel. Each beam was uniquely coded, which provided robust disentangling of the diffracted signal from different sample areas and thus an artifact-free reconstruction of the object.
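A minimal numpy sketch of the multi-beam forward model (our own illustration of the mutually incoherent mode idea, not the authors' reconstruction code): because the beams are mutually incoherent, the detector records the sum of the individual diffraction intensities rather than a coherent sum of amplitudes.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_beams = 64, 6

# Each beam has its own complex probe and illuminates its own sample region
# (simplified here as independent 64x64 patches).
probes = [rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
          for _ in range(n_beams)]
regions = [np.exp(1j * rng.random((n, n))) for _ in range(n_beams)]

# Mutually incoherent beams: far-field intensities add, amplitudes do not.
intensity = sum(np.abs(np.fft.fft2(P * O)) ** 2
                for P, O in zip(probes, regions))
print(intensity.shape)  # one detector frame encoding all 6 sample regions
```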
Moderators: Christian Schroer, Sara Krause-Solberg
Three talks on future topics related to imaging, intended to trigger discussion and gather voices from the imaging community. Each 15-minute talk is followed by a 10-minute discussion.