Description
Improving our understanding of brain structure and function requires multimodal imaging techniques that provide a detailed view of neuronal architecture. At the microscopic level, different histological stainings and optical methods provide complementary insights into the composition of cells and fibers. However, they typically cannot be applied to the same tissue sample, which makes the combined analysis of histological modalities challenging.
3D polarized light imaging (3D-PLI) is performed on tissue sections from frozen brains and does not require staining. It is therefore possible to apply stainings like Cresyl violet after the 3D-PLI measurement. Data generated in this way provide combined insight into the cytoarchitecture and the connectivity of nerve fiber structures for the same tissue sample. However, the staining process induces unavoidable additional distortions in the sections after the 3D-PLI measurement, so multimodal analysis requires a nonlinear alignment of the two modalities. This alignment is impeded by the lack of common texture features between cell body stainings and 3D-PLI. The tedium of creating such a joint database motivates an investigation into which cytoarchitectonic features might already be present within 3D-PLI itself. Automatic extraction of such features would enable large-scale multimodal data analysis and expand the interpretation of 3D-PLI.
In this work, we explore the possibility of learning an image representation that closely matches Cresyl violet stains directly from 3D-PLI images, building on recent advances in image-to-image translation and style transfer. We aim to recover as many pixel-aligned cell structures as possible from the 3D-PLI images. We therefore use pixel-to-pixel comparisons between both modalities together with style-sensitive models such as generative adversarial networks (GANs) and neural style transfer (NST). As training data, we use 11 sections from a vervet monkey brain for which both 3D-PLI measurements and Cresyl violet staining of the same sections were acquired. A global, pixel-precise registration of the stained images to the 3D-PLI images for acquiring paired training data was infeasible, as the two modalities share only few visible features. We therefore first perform an affine alignment of representative regions at a coarse scale and then, during training, apply an additional exhaustive linear ad-hoc alignment to smaller patches (see the sketches below). This approach allows a pixel-accurate comparison of the cell structures predicted by our method with the measured ground truth when training a deep learning network.
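To illustrate how a pixel-to-pixel loss can be combined with an adversarial style objective, the following is a minimal sketch of a conditional-GAN training step in the spirit of pix2pix (Isola et al., 2017), assuming PyTorch. The generator G, discriminator D, optimizers, and the weighting lambda_l1 are illustrative assumptions, not the configuration used in this work.

```python
import torch
import torch.nn as nn

adv_loss = nn.BCEWithLogitsLoss()  # adversarial (realism) objective
pix_loss = nn.L1Loss()             # pixel-aligned reconstruction objective
lambda_l1 = 100.0                  # assumed weighting, as in pix2pix

def train_step(G, D, opt_G, opt_D, pli, stain):
    """One conditional-GAN step: pli is the 3D-PLI input patch,
    stain the registered Cresyl violet target patch (both N,C,H,W)."""
    # --- discriminator update: distinguish real from generated stains ---
    fake = G(pli).detach()
    d_real = D(torch.cat([pli, stain], dim=1))
    d_fake = D(torch.cat([pli, fake], dim=1))
    loss_D = 0.5 * (adv_loss(d_real, torch.ones_like(d_real))
                    + adv_loss(d_fake, torch.zeros_like(d_fake)))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # --- generator update: fool D and match the stain pixel-wise ---
    fake = G(pli)
    d_fake = D(torch.cat([pli, fake], dim=1))
    loss_G = (adv_loss(d_fake, torch.ones_like(d_fake))
              + lambda_l1 * pix_loss(fake, stain))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()
```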
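The exhaustive linear ad-hoc patch alignment could, in its simplest form, reduce to a brute-force search over small offsets before the pixel loss is computed. The sketch below illustrates this idea under strong simplifications (integer translations only, L1 loss); the function name and the search radius max_shift are hypothetical.

```python
import torch
import torch.nn.functional as F

def best_shift_l1(pred, target, max_shift=4):
    """Hypothetical stand-in for the exhaustive ad-hoc alignment:
    compare the central crop of the predicted patch against every
    integer-shifted crop of the stained target and keep the minimum
    L1 loss (translation only; the actual method may differ)."""
    m = max_shift
    core = pred[..., m:-m, m:-m]      # central crop of the prediction
    H, W = core.shape[-2:]
    best = None
    for dy in range(2 * m + 1):       # all vertical offsets
        for dx in range(2 * m + 1):   # all horizontal offsets
            cand = target[..., dy:dy + H, dx:dx + W]
            loss = F.l1_loss(core, cand)
            if best is None or loss < best:
                best = loss
    return best                        # loss at the best-matching offset
```

Training against the minimum over offsets makes the pixel loss tolerant to small residual misalignments between the two modalities, which is the purpose the ad-hoc alignment serves here.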
Initial studies show that our method can reconstruct the structures and positions of most cell bodies revealed by the Cresyl violet staining directly from 3D-PLI images. The generated images also support the registration of stained sections scanned after the 3D-PLI measurement to the 3D-PLI reference frame, as the artificially generated staining resembles the real staining much more closely than the original 3D-PLI measurements do.
The developed method is a promising tool to support large-scale multimodal imaging combining 3D-PLI measurements and Cresyl violet stainings of the same tissue samples. Motivated by these findings, we plan to collect more training data across brain regions and species and to optimize the method for more extensive experiments.