Speaker
Description
Spatial omics technologies enable the study of molecular distribution patterns within the brain, offering critical insights into cellular organization and function. A significant challenge in this field is accurately mapping spatial omics data onto existing 3D brain atlases. Successful mapping would facilitate multimodal atlas creation and allow precise transfer of brain area labels from atlases to spatial omics samples, thereby unifying brain area annotations across diverse experiments. Current approaches rely on image registration methods, which perform well for 2D-to-2D or 3D-to-3D alignment. However, spatial omics often necessitates 2D-to-3D alignment, requiring the integration of sparse 2D omics slides into comprehensive 3D brain atlases. Additionally, traditional image-based registration methods fail to leverage the rich molecular abundance and distribution data intrinsic to spatial omics samples.
To address these limitations, we propose a novel deep learning-based, feature-driven approach for mapping spatial omics data onto 3D atlases. Our strategy begins by creating robust unimodal embeddings for each data modality. We then map the unimodal omics embeddings to histology embeddings using multimodal machine learning techniques based on self-supervised contrastive learning and multimodal optimal transport. We evaluate learning strategies both with and without leveraging spatial location information derived from image-based registration, aiming to anchor spatial omics samples effectively within existing 3D brain atlases.
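As a rough illustration of this alignment step, the sketch below pairs a CLIP-style InfoNCE contrastive loss (for omics/histology embeddings linked by spatial location) with a plain Sinkhorn iteration that computes an entropic optimal-transport coupling between unpaired embedding sets. All function names, dimensions, and hyperparameters are illustrative assumptions and not the actual implementation described in the talk.

```python
# Minimal sketch (assumed, not the authors' code): contrastive and OT-based
# alignment of spatial omics embeddings with histology embeddings.
import torch
import torch.nn.functional as F


def info_nce_loss(omics_emb: torch.Tensor, hist_emb: torch.Tensor,
                  temperature: float = 0.07) -> torch.Tensor:
    """Symmetric contrastive loss over paired omics/histology embeddings (N x D)."""
    omics = F.normalize(omics_emb, dim=-1)
    hist = F.normalize(hist_emb, dim=-1)
    logits = omics @ hist.T / temperature              # N x N similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)
    # Matching pairs sit on the diagonal; contrast against all other pairings.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))


def sinkhorn_coupling(omics_emb: torch.Tensor, hist_emb: torch.Tensor,
                      eps: float = 0.05, n_iter: int = 100) -> torch.Tensor:
    """Entropic OT plan between unpaired omics (N x D) and histology (M x D) embeddings."""
    cost = torch.cdist(F.normalize(omics_emb, dim=-1),
                       F.normalize(hist_emb, dim=-1)) ** 2  # N x M cost matrix
    K = torch.exp(-cost / eps)
    a = torch.full((cost.size(0),), 1.0 / cost.size(0))    # uniform source marginal
    b = torch.full((cost.size(1),), 1.0 / cost.size(1))    # uniform target marginal
    u, v = a.clone(), b.clone()
    for _ in range(n_iter):                                 # Sinkhorn scaling updates
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]                      # transport plan (N x M)


if __name__ == "__main__":
    omics = torch.randn(128, 256)   # e.g. MERFISH / MALDI-MSI spot embeddings
    hist = torch.randn(128, 256)    # histology patch embeddings from the atlas
    print(info_nce_loss(omics, hist).item())
    print(sinkhorn_coupling(omics, hist[:96]).shape)        # torch.Size([128, 96])
```

In such a setup, the optimal-transport plan could provide soft omics-to-histology pairings when exact correspondences from image-based registration are unavailable, while the contrastive objective exploits pairings where spatial anchors exist.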
In this talk, I will provide an overview of our current unimodal and multimodal representation learning frameworks, highlighting recent results from integrating mouse brain histological atlases with spatial transcriptomics (MERFISH) and spatial lipidomics (MALDI-MSI) data. Furthermore, I will discuss how these results can be extended and applied to BigBrain and human brain samples.