You are cordially invited to attend the 9th BigBrain Workshop, taking place in Berlin, Germany, on October 28 and 29, 2025.
This workshop has established itself as the annual meeting place for the BigBrain community to come together and present their latest research, discuss prospects of the BigBrain-associated data and tools, and brainstorm on how to better leverage high-performance computing and artificial intelligence to create multimodal, multiresolution tools for the high-resolution BigBrain and related datasets.
The BigBrain Workshop will be held in conjunction with a Training Day, taking place as a full-day event on October 27, on-site at the conference venue.
This year’s workshop also serves as the Closing Symposium of the Helmholtz International BigBrain Analytics and Learning Lab (HIBALL), highlighting the remarkable achievements of this transatlantic collaboration. To mark this special occasion, we will feature an exceptional lineup of distinguished speakers, not only to celebrate HIBALL’s success but also to explore the future of brain science, fostering discussions on innovative methods and applications in the field.
The HIBALL consortium, together with the BigBrain community, remains committed to advancing this extraordinary dataset, continually expanding the knowledge and technical infrastructure that underpins cutting-edge research in neuroscience and beyond.
Mark your calendars and plan to attend this exciting event in Berlin. We hope to see you there!
The event is free of charge but prior registration is required.
Registration & Abstract submission are open!
Speakers

- Helen Zhou, Centre for Sleep and Cognition
- Kâmil Uludağ, Sunnybrook Research Institute, University of Toronto
- Andreas Horn, Network Stimulation Laboratory
- Petra Ritter, Brain Simulation Section
- Dagmar Kainmüller, Biomedical Image Analysis
- Registration open: March 1, 2025
- Abstract submission open: May 28, 2025
- Abstract submission due:
- Acceptance notifications:
- Registration deadline:
- HBHL Training Day: October 27, 2025
- BigBrain Workshop: October 28 and 29, 2025
- Montreal Neurological Institute, McGill University, Montreal: Alan C. Evans, Paule-Joanne Toussaint
- Institute of Neuroscience and Medicine (INM-1), Forschungszentrum Jülich: Katrin Amunts, Susanne Wenzel
Please contact the programme committee if you have any questions. We will continuously update the information on this page and also share information via LinkedIn (@BigBrainProject) and e-mail.
For some registered participants, e-mails from this platform seem to end up in the spam folder. To make sure you receive all information, please check your junk folder.
By default, all times are given in CET. To view them in your local time, please set your time zone accordingly via the menu at the top right of this page.
Advances in brain imaging and AI provide an unprecedented opportunity to explore the human mind and develop new approaches for treating neurological disorders. Each neurodegenerative disorder affects distinct large-scale brain networks. This talk will focus on brain network phenotypes in neurological disorders such as Alzheimer’s and cerebrovascular disease, specifically how these network phenotypes relate to pathology, help identify at-risk groups, and predict cognitive decline. Our recent work on AI-driven models for brain decoding and interpretable brain foundation models with efficient adaptation strategies will also be discussed. Moving forward, integrating AI with brain imaging paves the way for improved early diagnosis and treatment strategies for neuropsychiatric disorders.
Dr. Juan Helen Zhou is an Associate Professor at the [Centre for Sleep and Cognition](https://medicine.nus.edu.sg/csc/), and Director of the [Centre for Translational Magnetic Resonance Research](https://medicine.nus.edu.sg/tmr/), Yong Loo Lin School of Medicine, National University of Singapore (NUS). She is also affiliated with the Department of Electrical and Computer Engineering at NUS.
Her research focuses on selective brain network-based vulnerability in aging and neuropsychiatric disorders, leveraging multimodal neuroimaging and machine learning approaches. Dr. Zhou received both her Bachelor’s and Ph.D. in Computer Science from Nanyang Technological University, Singapore. She completed multiple postdoctoral fellowships at the Memory and Aging Center, Department of Neurology, University of California, San Francisco; the Computational Biology Program at the Singapore-MIT Alliance; and the Department of Child and Adolescent Psychiatry at New York University.
Dr. Zhou has served as a Council Member and Program Committee Member of the Organization for Human Brain Mapping (OHBM) and is a recognized OHBM Fellow. She serves on the Advisory Board of Cell Reports Medicine, and has held editorial roles with eLife, Human Brain Mapping, and Communications Biology. Her research has been supported by various funding bodies in Singapore, the Royal Society (UK), and the NIH (USA).
Dr. Juan Helen Zhou will deliver the BigBrain Project Educational Lecture at the HBHL Training Day.
Lab Web: www.neuroimaginglab.org
Twitter: @HelenJuanZhou
Centre for Sleep and Cognition: medicine.nus.edu.sg/csc/
Centre for Translational MR Research: medicine.nus.edu.sg/tmr/
Quantifying and visualizing the structure of the cortex along its layers and across its depth can reveal architectonic boundaries between different cortical regions.
One approach to such analysis is constructing cortical surfaces, i.e., the inner (white matter) and outer (pial) borders of the cortical sheet.
The surfaces provide an easy way for sampling and analyzing MRI data at different locations in the cortex, and allow for surface-based registration across subjects for group studies.
In this session, I will cover some of our recent efforts to study the geometrical determinants of layer placement in the cortex, as well as linking the layers to histological data.
The practical part will demonstrate how to generate layer models, map MRI data onto the layers at different depths and visualize it, how to transform the data to average space, and how to map it to existing atlases such as the BigBrain.
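To give a flavour of the depth-mapping step described above, here is a minimal sketch, assuming hypothetical NumPy arrays `white_coords` and `pial_coords` with matched vertex coordinates in voxel space and a loaded MRI `volume`. It uses simple linear depth spacing between the two surfaces (not the equivolumetric layering used in BigBrain-style analyses) and is illustrative only, not the tutorial workflow.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def sample_profiles(volume, white_coords, pial_coords, n_depths=10):
    """Sample an MRI volume along straight lines between matched
    white-matter and pial vertices (linear depth spacing).

    volume       : 3D numpy array (the MRI data)
    white_coords : (n_vertices, 3) vertex coordinates in voxel space
    pial_coords  : (n_vertices, 3) matched pial coordinates in voxel space
    returns      : (n_depths, n_vertices) array of sampled intensities
    """
    depths = np.linspace(0.0, 1.0, n_depths)                  # 0 = white, 1 = pial
    profiles = np.empty((n_depths, white_coords.shape[0]))
    for i, d in enumerate(depths):
        coords = (1.0 - d) * white_coords + d * pial_coords   # intermediate surface
        # map_coordinates expects coordinates with shape (3, n_points)
        profiles[i] = map_coordinates(volume, coords.T, order=1)
    return profiles

# toy example: a 32^3 volume and 100 matched vertex pairs
rng = np.random.default_rng(0)
vol = rng.random((32, 32, 32))
white = rng.uniform(5, 25, size=(100, 3))
pial = white + rng.uniform(1, 3, size=(100, 3))
print(sample_profiles(vol, white, pial).shape)                # (10, 100)
```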
Oula Puonti is a senior researcher at the Danish Research Centre for Magnetic Resonance with a background in computer science and medical image analysis.
His research focuses on developing open source computational tools to analyze brain structures from a wide variety of MRI scans ranging from high-resolution scans acquired with ultra-high field scanners to low-resolution clinical scans.
The tools Oula has helped develop are incorporated into widely used software tools for brain image analysis (FreeSurfer) and biophysical modeling of the whole head (SimNIBS).
Oula's recent focus has been on modeling the cortical laminae from ex vivo data and applying these models to in vivo scans.
The regional microstructure of cortical brain areas, along with their connectivity to other regions, is linked to their functional profile. Consequently, microstructure varies significantly between different brain regions. Along with modern image analysis methods, the BigBrain provides a unique resource for quantifying microstructure in terms of numbers, densities, and distributions of cell bodies at different locations in the brain. In this tutorial, we demonstrate how the siibra toolsuite can be used to access micrometer-resolution BigBrain image data and extract cortical image patches for custom regions of interest. We will show how locations can be specified or sampled in interactive and scripted workflows, and demonstrate how state-of-the-art AI models can be used to extract and quantify cell instances from extracted image patches in a reproducible fashion. We will present a dataset of layer-specific cell densities for areas defined in the Julich-Brain cytoarchitectonic atlas, which has been created on the basis of these ideas and is available through siibra.
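The tutorial itself uses the siibra toolsuite and dedicated AI models; as a rough illustration of the final quantification step only, the sketch below counts cell-like blobs in a 2D grayscale patch with scikit-image (Otsu threshold plus connected components). It is a classical stand-in with a placeholder size threshold, not the workflow presented in the session.

```python
import numpy as np
from skimage import filters, measure, morphology

def count_cells(patch, min_area=20):
    """Rough cell-instance count in a grayscale patch
    (dark cell bodies on a bright background, as in cell-body stained sections)."""
    mask = patch < filters.threshold_otsu(patch)              # cell bodies are darker
    mask = morphology.remove_small_objects(mask, min_size=min_area)  # drop speckles
    labels = measure.label(mask)                              # connected components
    n_cells = labels.max()
    density = n_cells / patch.size                            # cells per pixel
    return n_cells, density

# synthetic example patch with one fake "cell body"
rng = np.random.default_rng(1)
patch = rng.normal(0.8, 0.05, size=(256, 256))
patch[60:70, 60:70] = 0.2
print(count_cells(patch))
```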
Timo Dickscheid is a Professor for Microscopic Image Analysis at Heinrich-Heine University Düsseldorf, and head of the "Big Data Analytics" group at the Institute of Neuroscience and Medicine (INM-1), Forschungszentrum Jülich, Germany. He is a computer scientist by training and earned his PhD in the field of Computer Vision and Photogrammetry at the University of Bonn in 2011. Dickscheid joined Forschungszentrum Jülich as a post-doc in 2010 to develop image analysis methods for microscopic imaging. After accepting a position as the head of Information Technology at the German Federal Institute of Hydrology in Koblenz in 2012, he returned to Jülich in 2014 to set up his own research group. Aiming to build a cellular resolution multimodal model of the human brain, his work addresses distributed data management for high throughput imaging, AI methods for large-scale biomedical image analysis, and software interfaces for working with very large image data. Dickscheid leads the development of the brain atlas tools used in this tutorial, including the siibra tool suite.
Sebastian Bludau is a senior researcher at the Institute of Neuroscience and Medicine (INM-1), Forschungszentrum Jülich. He graduated with a diploma in biology from Heinrich-Heine University Düsseldorf and received a PhD in theoretical medicine from RWTH Aachen University in 2011, where he studied the cytoarchitecture of the frontal pole of the human brain. Subsequently, he became a post-doc at the INM-1 (Structural and Functional Organization of the Brain) at Forschungszentrum Jülich. His current research focuses on the cytoarchitecture of the human brain, the integration of different imaging modalities into high-resolution reference spaces, high-throughput optical microscopy, and the development and testing of new prototype software for the analysis of histological images.
Requirements: A laptop with an up-to-date web browser (Chrome or Firefox is recommended) is required for the hands-on examples. All examples will be run on pre-built Jupyter notebooks, which will be provided for downloading. Please register for an EBRAINS account in advance.
The EBRAINS Knowledge Graph (KG) is the central metadata management system of the EBRAINS research infrastructure, responsible for registering and integrating research products, such as data, computational models, and software, in a searchable graph database. In this session we will demonstrate the capabilities of the KG Search user interface (UI) and teach you how to register your own research products in the EBRAINS KG. Moreover, we will demonstrate how to programmatically work with the metadata registered in the KG using the Python package fairgraph.
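For orientation ahead of the session, a minimal sketch of what programmatic KG access with fairgraph can look like is shown below. It assumes a valid EBRAINS authentication token and fairgraph's openMINDS core classes; exact class and argument names may differ between fairgraph versions, so treat it as a rough outline rather than the session material.

```python
# Rough outline only: assumes an EBRAINS token and a recent fairgraph release;
# class/argument names may differ by version.
from fairgraph import KGClient
from fairgraph.openminds.core import DatasetVersion

client = KGClient(token="YOUR_EBRAINS_TOKEN")   # placeholder token

# list a few dataset versions registered in the Knowledge Graph
datasets = DatasetVersion.list(client, size=5)
for ds in datasets:
    print(ds)
```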
lyuba.zehl@ebrains.eu
Data and Knowledge Engineer and Coordinator at EBRAINS Brussels
Lead Developer and Coordinator of openMINDS
CBRAIN is a web portal that provides seamless access to high-performance computing clusters, and is a component of the NeuroHub ecosystem of neuroinformatics tools. This hands-on, interactive tutorial will cover the main functionalities of CBRAIN for the processing and management of data, illustrate them on BigBrain data, and demonstrate their interaction. Practical examples using scientific tools, such as HippUnfold and Cell Detection, in CBRAIN will be covered, along with how to access BigBrain-related datasets in various repositories.
In this tutorial, participants will learn how to use CBRAIN to access and process BigBrain data on HPC resources through an easy-to-use web portal. Specific topics covered will include:
bryan.caron@mcgill.ca
Bryan Caron is the leader of the CBRAIN platform and Co-Principal Investigator and Director, Operations and Development of the NeuroHub project, a core facility platform of McGill University’s Healthy Brains, Healthy Lives initiative. Prior to leading CBRAIN and NeuroHub, Bryan was a Research Scientist, Adjunct Professor, and Director, Business Operations of the McGill High Performance Computing Centre. Bryan has over 25 years of experience in high-performance computing and data-intensive science.
In this 1.5-hour session, participants will get an introduction to DataLad, an open source software tool for distributed data and reproducibility management, built upon Git and git-annex. With both a conceptual overview of the tool and its major features, as well as a hands-on component, this session focuses on technical and conceptual aspects alike. It aims to show participants how to use DataLad for their research and for which use cases it is suitable.
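As a small preview of the kind of workflow covered in the session, the sketch below uses DataLad's Python API to create a dataset, save a file, and run a command with provenance capture. The paths and the example command are made up for illustration; this is not the session material.

```python
# Minimal DataLad sketch (illustrative paths/commands; requires DataLad installed).
import datalad.api as dl

# create a new dataset under version control
ds = dl.create(path="my-analysis")

# add a file and save it with a commit message
with open("my-analysis/notes.txt", "w") as f:
    f.write("first notes\n")
dl.save(dataset="my-analysis", message="Add notes")

# run a command whose execution is recorded for provenance
dl.run(cmd="wc -l notes.txt > linecount.txt",
       dataset="my-analysis",
       message="Count lines in notes")
```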
Michael Hanke is a professor at the Heinrich Heine University Düsseldorf, and head of the Psychoinformatics group at the Institute of Neuroscience and Medicine (INM-7) at the Forschungszentrum Jülich. He has co-created several neuroinformatics software projects, among them the NeuroDebian project, PyMVPA, and DataLad.
Adina Wagner is a research associate and scientific coordinator at the Institute of Neuroscience and Medicine at the Forschungszentrum Jülich. She contributed to the DataLad project as a software developer and co-created the DataLad Handbook documentation project. She is a proponent of open science, open source software, and reproducible research.
The release of a Drosophila melanogaster central brain connectome reconstructed from electron microscopy (Scheffer et al., eLife 2020), as well as a large resource of transgenic Drosophila lines and respective sparse light microscopy acquisitions (Meissner et al., eLife 2023), have paved the way towards functional studies of a vast collection of individual neurons in vivo. We present here computational and AI approaches for reconstructing and matching individual neuronal morphologies across imaging modalities, which we have employed at scale to link available electron- and light-microscopy resources by means of solving billions of optimization problems.
Dagmar Kainmüller heads the Biomedical Image Analysis Lab at the Max-Delbrueck-Center for Molecular Medicine Berlin and the Berlin Institute of Health. The lab pursues theoretical advances in ML to solve challenging image analysis problems in biology, with a focus on cell segmentation, classification, and tracking. The aim of the lab is to facilitate scientific discovery via automated analysis of high-throughput microscopy data.
BigBrain2 is a second BigBrain dataset that complements and builds on our expertise from the first BigBrain [1], providing new insights into variations between brains at whole-brain and cytoarchitectonic level. The brain (30-year-old male donor) was formalin-fixed, paraffin-embedded, and sectioned coronally (20 μm) into 7676 sections. Each section was stained for cell bodies (Merker stain). The sections were scanned at 10 μm in-plane (flatbed scanner) and 1 μm in-plane.
We have developed a new approach to assist the labour-intensive process of correcting the artefacts in the histological images, and, subsequently, to reconstruct the high-resolution 3D volume, with correction for staining imbalances. Despite a significantly improved wet-lab processing pipeline, sectioning and histological preparation of a whole brain at this thickness remains a challenging task, leading to a heterogeneity in the extent and severity of artefacts, rendering a fully automated repair process of all sections impracticable.
Initially, the 10 μm sections were resampled at 20 μm in-plane, to match the section thickness, and every fifth section (5-series) was repaired manually, ensuring data provenance tracking [2]. An initial 3D reconstruction at 100 μm has been created [3]. Remaining sections were processed sequentially; larger artefacts were identified and manually corrected [4]. For the remaining artefacts, each section was registered to the two nearest repaired sections of the 5-series, from which a virtual reference image was interpolated at the position of the target section. Smaller artefacts (e.g., missing data) were corrected by interpolating good tissue from the reference section in place of the missing tissue in manually identified areas.
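The virtual-reference step can be illustrated with a minimal sketch, assuming the two nearest repaired 5-series sections have already been registered to the target section's coordinate frame: a reference image is interpolated by distance weighting along the stack, and manually outlined defect areas are filled from it. This is a simplified stand-in for the actual repair pipeline, with made-up array values.

```python
import numpy as np

def virtual_reference(prev_repaired, next_repaired, pos_prev, pos_target, pos_next):
    """Interpolate a virtual reference image at the target section position
    from the two nearest repaired sections (already registered to the target)."""
    w = (pos_target - pos_prev) / (pos_next - pos_prev)   # 0..1 along the stack
    return (1.0 - w) * prev_repaired + w * next_repaired

def fill_defects(target, reference, defect_mask):
    """Replace manually identified defect areas in the target section
    with the interpolated reference tissue."""
    repaired = target.copy()
    repaired[defect_mask] = reference[defect_mask]
    return repaired

# toy example: three 4x4 "sections", with missing tissue in the target
prev_sec = np.full((4, 4), 0.2)
next_sec = np.full((4, 4), 0.6)
target = np.full((4, 4), 0.4)
target[1:3, 1:3] = np.nan                                  # missing tissue
ref = virtual_reference(prev_sec, next_sec, pos_prev=0, pos_target=2, pos_next=5)
print(fill_defects(target, ref, np.isnan(target)))
```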
To support the 3D reconstruction, tissue masks were created in an automated fashion using the nnU-Net algorithm [5], with a combination of global and local training sets. The approach was extended to obtain a tissue classification for white matter, grey matter, and layer-1 on the repaired histological sections, every 100 μm apart, using a training set of 77 sections 2 mm apart. Unlike global 3D tissue classification, nnU-Net provided a fast and robust 2D segmentation insensitive to staining imbalances and could suitably distinguish layer-1 from white matter, despite both tissue classes showing overlapping cell-body stain intensity distributions.
The masked repaired sections were aligned to the post-mortem MRI of the fixed brain in an iterative process by 3D registration of the stacked images to the MRI, followed by 2D registration of the individual images to the sliced MRI, while gradually increasing the complexity of 2D and 3D registration from rigid-body to affine to non-linear. These global iterations helped resolve the lower-frequency alignment errors causing 'jaggies', as were present in BigBrain1, and accounted for tissue compression and shrinkage during histological processing. Finally, section-to-section non-linear 2D alignment (without MRI) was performed to resolve high-frequency alignment errors. Optical balancing was applied to correct for staining imbalances across the brain volume.
The new pipeline resulted in a first high-quality 3D reconstruction of the histological images, currently available at 100 μm. The dataset was further enriched by cortical surfaces and annotations. As part of the Julich Brain Atlas [6] 126 cortical and subcortical structures have been annotated in the histological sections with a resolution of 20 μm. The hippocampus [7] has been mapped in the 3D reconstructed data set.
References:
Background and aim: The brain’s vasculature is critical for sustaining neural function and shapes in vivo neuroimaging signals such as BOLD fMRI. The microvasculature, including capillaries and arterioles, is closely tied to sites of neural activation, while macrovasculature supplies and drains broader regions. Classical ink-injection studies suffered from limited penetration into finer vessels¹,², while in vivo MRI methods remain limited in spatial resolution. To improve vascular visualization, we developed a whole-brain atlas combining post-mortem (immuno)histochemistry with high-resolution MRI. We quantified vascular scales across 31 subcortical structures, providing a reference framework for vascular architecture.
Methods: A healthy donor brain (male, 76y) was obtained through a whole-body donation program (Amsterdam UMC; Fig. 1A). After 10% formalin perfusion fixation, quantitative MRI maps of longitudinal relaxation rate R1, transverse relaxation rate R2* and proton density were acquired at 400-μm and 200-250-μm isotropic resolution on a Magnetom 7T scanner (Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Leiden UMC)³,⁴. The brain was then cryoprotected in sucrose, frozen in TissueTek, and coronally sectioned into 811 slices at 200 μm using a cryomacrotome (-16 °C). Blockface images were captured for MRI alignment and histology registration (Fig. 1B). Sections were stained alternately with CD31 (PECAM-1), SMA (Smooth Muscle Actin), and Bielschowsky Silver to visualize the vascular bed and scanned at 21-μm in-plane resolution (Fig. 1C–E). Color deconvolution was performed in Python using scikit-image’s Hematoxylin+DAB matrix (Fig. 1F), followed by vessel extraction and 3D mapping using Nighres filtering (Fig. 1G). The MASSP2.0 algorithm was applied to parcellate 31 subcortical structures on the 3D blockface reconstruction³,⁵. Labels were projected onto the aligned histology stack to identify anatomical structures (Fig. 1H). Within each MASSP2.0-defined region, vessel extractions were performed and densities quantified (Fig. 1I). Additional microscopic assessments characterized vessel morphology in SMA- and CD31-stained sections.
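As an illustration of the colour-deconvolution and density step, the sketch below separates the DAB channel of an RGB immunostained image with scikit-image's built-in haematoxylin/eosin/DAB stain matrix (rgb2hed) and computes a stained-area fraction inside a region mask. The threshold and mask are placeholders, and the actual pipeline additionally uses Nighres vessel filtering and MASSP2.0 parcels.

```python
import numpy as np
from skimage.color import rgb2hed

def stain_density(rgb_image, region_mask, dab_threshold=0.05):
    """Fraction of a region covered by DAB-positive staining (e.g. CD31 or SMA).

    rgb_image   : (H, W, 3) float RGB image of the stained section
    region_mask : (H, W) boolean mask of the anatomical structure
    """
    hed = rgb2hed(rgb_image)            # haematoxylin / eosin / DAB channels
    dab = hed[..., 2]                   # DAB channel
    positive = dab > dab_threshold      # placeholder threshold
    return positive[region_mask].mean()

# toy example: random image and a central square "structure"
rng = np.random.default_rng(2)
img = rng.random((64, 64, 3))
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True
print(f"stained-area fraction: {stain_density(img, mask):.3f}")
```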
Results: We developed a comprehensive dataset of vessel densities across all MASSP2.0-defined structures (Fig. 1J), along with regional vessel morphology and orientation. In general, grey matter regions like the accumbens and thalamus showed denser capillary (CD31) perfusion than white matter structures, which are dominated by larger vessels (arteries). CD31 densities varied greatly across regions, ranging from 33% in the accumbens to 5% in the posterior commissure. SMA density percentages varied less, with 16% in the globus pallidus pars externa to 4% in the CA1.
Discussion: The integration of high-resolution post-mortem MRI with (immuno)histochemistry creates a framework that captures the broader anatomical context and fine-scale vascular profiles across (sub)cortical structures, addressing current gaps in vascular mapping. These data will form the basis for future studies aimed at ultimately improving the interpretation of BOLD fMRI signals and accounting for regional vascular variability. The dataset will be made freely available upon publication of the full paper to support its use as a benchmark in future studies.
¹ Duvernoy HM et al. Brain Res Bull. 1981;7(5):519-79.
² Lauwers F et al. Neuroimage. 2008;39:936-48.
³ Alkemade A et al. Sci Adv. 2022;8(17):eabj7892.
⁴ Alkemade A et al. Front Neuroanat. 2020;14:536838.
⁵ Bazin P et al. Imaging Neurosci. 2025;doi:10.1162/imag_a_00560.
In the computational modeling of the neural correlates of consciousness, many efforts have focused on the whole-brain scale [1]. Although this powerful approach successfully captures global dynamics, it may overlook potentially insightful details at the population and microcircuit levels. An alternative strategy is to concentrate on a limited subset of nodes considered most essential and to represent them with greater biological specificity, embedded in a whole brain network – i.e. a multiscale modeling approach [2]. Moreover, growing evidence points to the contribution of single-cell computations, namely in layer 5 cortical pyramidal neurons, to the maintenance of conscious states [3]. Because these dynamics can be integrated into mesoscale and even macroscale models [4], their incorporation represents a natural and important next step in advancing computational approaches to consciousness.
Here, we present a biophysically detailed cortico-subcortical spiking network model that covers cellular, microcircuit, and brain network levels. The model includes a subset of cortical regions selected for their theoretical significance [5], clinical relevance to disorders of consciousness (DoC) [6, 7], and availability of experimental recordings. These mainly include nodes of the default mode, frontoparietal, and salience networks. Each cortical region of interest is represented as a column, organized into superficial, middle, and deep layers. The thalamus comprises the intralaminar nuclei and the reticular nucleus. The inclusion of the basal ganglia (striatum and globus pallidus internus) further enables the model to test predictions of the fronto-striato-thalamic circuit model for recovery of consciousness [8]. Interareal projections are constrained by layer-specific anatomical structural connectivity. This laminar resolution enables examination of two mechanisms thought to be crucial for maintaining conscious states: the cellular mechanism for coupling inputs arriving at different cortical layers [4], and the interaction of feedforward and feedback streams of cortico-cortical communication [9, 10]. The model was implemented in NetPyNE [11], a tool for modeling large neural circuits with the NEURON simulator.
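For readers unfamiliar with NetPyNE, the toy sketch below shows the declarative style in which such models are specified: a single Hodgkin-Huxley population with background Poisson drive, following the structure of NetPyNE's introductory tutorials. It requires NEURON to run, and it is emphatically not the cortico-subcortical model described in this abstract; parameter values are tutorial-style placeholders.

```python
# Toy NetPyNE specification (requires NEURON); tutorial-style placeholders only.
from netpyne import specs, sim

netParams = specs.NetParams()

# one small excitatory population of Hodgkin-Huxley point neurons
netParams.popParams['E'] = {'cellType': 'PYR', 'numCells': 20}
netParams.cellParams['PYRrule'] = {
    'conds': {'cellType': 'PYR'},
    'secs': {'soma': {
        'geom': {'diam': 18.8, 'L': 18.8, 'Ra': 123.0},
        'mechs': {'hh': {'gnabar': 0.12, 'gkbar': 0.036, 'gl': 0.003, 'el': -70}}}}}

# background Poisson drive onto the population
netParams.synMechParams['exc'] = {'mod': 'Exp2Syn', 'tau1': 0.1, 'tau2': 5.0, 'e': 0}
netParams.stimSourceParams['bkg'] = {'type': 'NetStim', 'rate': 10, 'noise': 0.5}
netParams.stimTargetParams['bkg->E'] = {
    'source': 'bkg', 'conds': {'pop': 'E'},
    'weight': 0.01, 'delay': 5, 'synMech': 'exc'}

simConfig = specs.SimConfig()
simConfig.duration = 1000          # ms
simConfig.recordTraces = {'V_soma': {'sec': 'soma', 'loc': 0.5, 'var': 'v'}}

sim.createSimulateAnalyze(netParams=netParams, simConfig=simConfig)
```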
Although the focus of this work is on modeling pathological impairments of consciousness in DoC, we have preliminarily validated the thalamocortical module against empirical markers of medically induced loss and recovery of consciousness [12, 13], which also helped constrain parameter values. The cortical and thalamocortical components were adapted from prior modeling works on anesthesia [14, 15]. The next step is to extend the validation to the full network to assess its capacity to reproduce pathological states of DoC, by manipulating long-range cortico-cortical and thalamo-cortical pathways to mimic the structural disconnections characteristic of these conditions.
The proposed framework aims to provide a unified mechanistic platform for capturing signatures of conscious states, with particular relevance to pathological conditions. Its biophysical plausibility enables testing hypotheses about pharmacological, invasive, and non-invasive interventions. The model offers a means to assess metrics such as fronto-parietal communication breakdown, frequency-band alterations, changes in structure-function correlation, neural complexity and integration and segregation of information. Ultimately, this work seeks to establish a generative platform that bridges theoretical, experimental, and clinical perspectives on the mechanisms of consciousness and its disorders.
Introduction: Multimodal image co-registration between magnetic resonance imaging (MRI) and histology is a critical yet challenging task in neuroscience research. Understanding brain microstructure requires combining MRI, which provides non-invasive and versatile brain information at spatial resolutions ranging from 70 to 100 micrometers, with histology, which delivers subcellular detail at resolutions ranging from ~0.1 to 1.1 micrometers. Accurate co-registration of these complementary modalities is essential for investigating tissue changes across diverse neurological disorders. However, registering 2D histological sections to a 3D MRI volume is intrinsically difficult due to cross-modality contrast differences, resolution mismatch, and section defects such as tears, warping and missing pieces. Consequently, existing registration tools are often not directly compatible with our datasets, particularly when histological tissue is fragmented or incomplete.
Method and Materials: Our study uses a dataset of 22 adult rat brains from a moderate Traumatic Brain Injury (moTBI) model, scanned at 3 and 28 days post-injury using in-vivo and ex-vivo MRI, followed by histological sectioning and staining with Myelin and Nissl protocols. We implement and adapt RAPSODI (RAdiology Pathology Spatial Open-Source multi-Dimensional Integration) to our rat brain moTBI dataset. RAPSODI is an open-source image registration framework originally developed for prostate cancer patients, specifically to register presurgical MRI with histopathology images from radical prostatectomy specimens [1]. RAPSODI consists of a three-stage pipeline. First, it generates a 3D reconstructed digital model of the histology, which serves as the representation of the tissue before sectioning. Second, the MRI slices and the corresponding histology slices are registered. Third, it uses the optimized alignments to project the prostate and the cancer regions delineated in histological sections back onto the MRI volume [1]. In our adaptation, we replace the prostate and cancer regions with structural masks of brain anatomy and modify the preprocessing pipelines to accommodate the format of our dataset. For the present analysis, we adapted the tool to a single rat brain for which the ex-vivo MRI data were acquired at 11.7 Tesla with a T2-weighted sequence at an isotropic spatial resolution of 70 micrometers. Histological sections were scanned at 0.1369 micrometers in-plane resolution and 60 micrometers through-plane resolution.
Results: As an initial implementation stage, we selected 30 slices of Nissl-stained coronal sections and corresponding ex-vivo MRI slices from a single rat brain. The histological scans were subsequently downsampled to 1.095 micrometers in-plane resolution. Our co-registration optimization employed a mean squared error objective minimized by gradient descent with a reduced learning rate (0.005), together with stricter constraints on rotation, scaling, and shearing transformations. We then co-registered the slices using the adapted RAPSODI framework to evaluate registration quality. The quality of the co-registration and alignment can be seen in Figure 1.
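To make the registration objective concrete, the following sketch (not the RAPSODI code) aligns one 2D image to another by minimizing the mean squared error over a constrained rigid transform (rotation plus translation) with SciPy, using a smooth synthetic image pair. The real pipeline optimizes affine parameters with gradient descent and additional constraints on scaling and shearing.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, rotate, shift
from scipy.optimize import minimize

def apply_rigid(image, params):
    """Apply a rotation (degrees) followed by a translation (pixels)."""
    angle, tx, ty = params
    out = rotate(image, angle, reshape=False, order=1, mode='nearest')
    return shift(out, (ty, tx), order=1, mode='nearest')

def mse(params, moving, fixed):
    """Mean squared error between the transformed moving image and the fixed image."""
    return np.mean((apply_rigid(moving, params) - fixed) ** 2)

# toy example: misalign a smooth synthetic "section" and re-align it
rng = np.random.default_rng(3)
fixed = gaussian_filter(rng.random((64, 64)), sigma=4)
moving = apply_rigid(fixed, (-4.0, 2.0, -1.0))

result = minimize(mse, x0=np.zeros(3), args=(moving, fixed),
                  method='Powell',
                  bounds=[(-10, 10), (-5, 5), (-5, 5)])   # constrained transform parameters
print("estimated (angle, tx, ty):", np.round(result.x, 2))
```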
Conclusion: We present preliminary results for histology-MRI co-registration in rat brains. Future work will include a landmark-based evaluation and extend analysis beyond a single animal to the full cohort, reporting quantitative metrics alongside qualitative overlays.
[1] Rusu M, Shao W, Kunder CA, et al. Registration of presurgical MRI and histopathology images from radical prostatectomy via RAPSODI. Med Phys. 2020;47(9):4177-4188.
Dense mapping of cortical connectivity at the voxel/vertex level can provide a more precise view of connectomic organization compared to the conventional approach of studying connectivity at the level of brain areas. This approach has been previously applied to resting-state functional connectivity (RSFC), providing important insights into the organization of cortical networks (Yeo et al., 2011) and boundaries (Gordon et al., 2016; Schaefer et al., 2018), as well as principal gradients of connectivity variation (Margulies et al., 2016). However, RSFC characterizes task-free, spontaneous functional connectivity, and it remains unclear to what extent the connectomic organization observed in the resting state generalizes to task-related conditions. Meta-analytical co-activation modeling (MACM) characterizes consistent task-based co-activations across published neuroimaging studies, and thereby provides a marker of task-related functional connectivity (Langner & Camilleri, 2021). While MACM has often been applied to specific regions of interest, a dense whole-cortex characterization of MACM at the voxel/vertex level is lacking.
Here, we used a GPU-based implementation of specific co-activation likelihood estimation to calculate dense MACM across the cerebral cortex. We created a dense MACM matrix for 26,459 gray matter voxels in MNI space (4-mm resolution) using the BrainMap dataset. Cortical seeds were mapped to the CIVET surface (76,910 vertices) for the subsequent analyses.
We quantified MACM strength as the overall meta-analytical co-activation rate, which was highest in frontal medial cortex, middle temporal gyrus, supplementary motor area, posterior cingulate and precuneus (Figure 1A). Next, we computed similarity of MACM patterns across all cortical vertices, resulting in the MACM$_{corr}$ matrix. Clustering analysis on MACM$_{corr}$ using K-means showed relatively stable solutions with 4, 6, 11, and 19 clusters (Figure 1C). We focused on the six-cluster solution (instability = 0.13, variance explained = 66.8%; Figure 1B), on par with the seven canonical RSFC networks (Yeo et al., 2011). There was a broad correspondence between the MACM and RSFC networks (Figure 1D), with some of the MACM networks aligning with single RSFC networks (e.g., N1 and N2 corresponding to the visual RSFC network), and some spanning multiple RSFC networks (e.g., N6 spanning limbic and default mode RSFC networks). Next, we applied principal component analysis on MACM$_{corr}$ and identified three principal axes that explained 79.5% of variance (Figure 1E,F). We found the first and second principal axes of MACM to be largely co-aligned with the first and second gradients of RSFC (Figure 1G).
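The clustering and gradient steps can be illustrated with a small sketch on synthetic data: given a vertex-by-vertex co-activation similarity matrix, K-means partitions the rows into networks and PCA extracts the principal axes. The actual analysis was run on the full MACM$_{corr}$ matrix over 76,910 CIVET vertices; the array sizes below are made up.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)

# synthetic stand-in for MACM_corr: similarity of co-activation profiles
profiles = rng.random((500, 100))            # 500 "vertices" x 100 "experiment features"
macm_corr = np.corrcoef(profiles)            # vertex-by-vertex similarity matrix

# K-means clustering of vertices into putative co-activation networks
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(macm_corr)

# principal axes ("gradients") of the similarity matrix
pca = PCA(n_components=3).fit(macm_corr)
axes = pca.transform(macm_corr)              # (n_vertices, 3) embedding
print("cluster sizes:", np.bincount(labels))
print("variance explained:", np.round(pca.explained_variance_ratio_, 3))
```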
Together, we provide a dense high-resolution characterization of cortical MACM. Based on clustering and principal component analysis, we found that task-related co-activation shows a macroscale organization that is broadly aligned with RSFC, but has additional specific characteristics. As the next phase of this project, we plan to extend our analyses to higher spatial resolution, directly compare and integrate dense MACM with dense RSFC, and generate multimodal boundary maps of sharp transitions in co-activation and connectivity patterns.
The reward network plays a key role in motivation and decision-making and is known to be disrupted in numerous psychiatric conditions. Despite its functional importance, a comprehensive microstructural characterization of this network, including both its interacting cortical and subcortical components, remains sparse. Recent research, including findings from humans and non-human primates, suggests that cortical brain regions with similar microstructure are more likely to be interconnected. Accordingly, understanding human cognition, drive, and affect may benefit from integrating local microarchitectural features with the brain’s macroscale functional organization. Advances in digitized human brain datasets, including post-mortem 3D histological resources, now allow for more detailed analyses that support and refine magnetic resonance imaging (MRI)-based findings. This study provides a multimodal characterization of the cortical and subcortical components of the reward network by combining structural and functional MRI with histological data as supporting evidence.
Twelve participants (7 females, mean age = 26.7, SD = 4.2) underwent three sessions of ultra-high-field (7T) MRI. Participants first completed a T1 relaxometry scan (MP2RAGE; 0.5mm isovoxels), which was used to segment the cortical gray matter and subcortical structures. Image intensities sensitive to intracortical myelin content were extracted from locations within a reward network mask defined a priori using the NeuroSynth database (Fig1A). We divided the mask across 20 uniform T1 intensity bins to capture spatial variation in the network myeloarchitecture. The data were then used to examine relationships between myelin content and intrinsic functional connectivity within the reward network.
T1 image intensities separated cortical and subcortical components of the reward network, with cortical regions generally showing higher T1 than subcortical structures (Fig1B). In the cortex, structural and functional measures were tightly coupled, with higher similarity in intracortical myelin predicting stronger intrinsic functional connectivity across bins (mean r = -0.34, SD = 0.16, p < 0.001). Subcortical areas, in contrast, showed a more heterogeneous pattern of structure-function coupling, with both positive and negative correlations between myeloarchitectural similarity and functional connectivity (mean r = 0.004, SD = 0.14, p = 0.383) (Fig1C). To further probe these patterns at a cellular level, we used the 3D BigBrain dataset to extract high-resolution staining intensity profiles. Across all vertices and voxels, BigBrain-derived intensities positively correlated with myelin-sensitive image intensities (r = 0.37, p < 0.001), supporting the sensitivity of our imaging measures to microarchitectural variation. This pattern was also evident in subcortical regions (Pearson: r = 0.36, p < 0.001).
Together, these findings demonstrate that the reward network is not a uniform system, but is composed of cortical and subcortical components that differ in their microstructural profiles and in their associations with large-scale functional network architectures.
Mental rotation (MR) is a crucial process that underlies our ability to spatially navigate and recognize objects despite viewing them from varying distances and viewpoints. MR competence is linked to superior sport and academic performance, and deficits can impact daily activities like driving. Previous work using single-pulse TMS has confirmed the causal involvement of the left and right dorsal premotor area (PMd) and the left superior parietal lobe (LSPL) in MR performance. In the present study, we aimed to identify critical time windows during which these regions are essential for task performance. Thirty healthy right-handed young adults received suprathreshold single-pulse TMS to PMd and LSPL during a mental rotation task, at 100, 200, 300, and 400 ms after stimulus presentation. TMS-induced disruption is expected to delay response times, highlighting the temporal dynamics of perceptual-motor decision processes. Preliminary results reveal that TMS to left PMd at 100 and 200 ms slowed response times on the task relative to vertex stimulation, suggesting that left PMd contributes to early stages of visuospatial transformation. Since left PMd affects right-sided movement, further investigations are underway replicating this experiment in a new right-handed cohort performing the same task with a left-handed response. Replicating our findings would confirm the specific causal involvement of left PMd in mental rotation.
Introduction
Extracting meaningful features from histological images is a fundamental step to automate tissue analysis and classification. This feature extraction can be approached through nonlinear methods that can preserve complex biological structural information, or through linear methods that are directly interpretable and less computationally demanding. Understanding when nonlinear approaches provide meaningful advantages over linear alternatives is crucial for efficient analysis of large-scale brain datasets. In this work, we compare a nonlinear method, convolutional autoencoders (AEs), with a classical linear method, principal component analysis (PCA), to extract features from myelin-stained sections of the rat brain.
Methods
We used myelin-stained sections from 13 rat brains, which were scanned at 0.013 mm$^2$/px resolution. Both methods, AEs and PCA, were trained on patches from 9 brains (420 images) and tested on the remaining 4 brains (194 images). For each method, we tested two input patch sizes: 128x128 pixels and 256x256 pixels, with both configurations compressing the input to 256 features. We evaluated AEs and PCA on: 1) reconstruction quality both quantitatively (mean square error, MSE) and qualitatively; and 2) the biological relevance of the extracted features through clustering.
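The PCA half of the comparison can be sketched in a few lines: flatten the image patches, fit a 256-component PCA on training patches, and measure reconstruction MSE on held-out patches. The arrays below are synthetic stand-ins for the actual myelin patches, and the convolutional autoencoder branch (trained on GPU) is not shown.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)

# synthetic stand-in for 128x128 myelin-stained patches, flattened to vectors
train = rng.random((420, 128 * 128))
test = rng.random((194, 128 * 128))

pca = PCA(n_components=256).fit(train)       # compress each patch to 256 features

# reconstruct held-out patches and compute the mean squared error
recon = pca.inverse_transform(pca.transform(test))
mse = np.mean((test - recon) ** 2)
print(f"held-out reconstruction MSE: {mse:.4f}")
```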
Results
PCA models were trained in less than 3 CPU days, while AE training required 6 GPU days for 128x128 pixel patches and 9 GPU days for 256x256 pixel patches. AE training used early stopping, halting when the validation error stopped improving for 10 consecutive iterations. PCA achieved lower MSE than AEs for both input patch sizes, but visual assessment revealed that AEs better preserved axonal structures while PCA produced smoother, more averaged reconstructions. When evaluating feature clustering, both methods obtained nearly identical clustering results using Gaussian Mixture Models. The clustering successfully separated white matter regions from grey matter areas, while also identifying subtypes within each tissue class based on myelin density and axonal distribution patterns.
Conclusions
Both PCA and AEs effectively extract features from myelin histology images. While PCA yields a lower MSE, AEs preserve fine textural details that may be critical for certain biological analyses. Despite these differences, the clustering results converge, indicating that both methods capture meaningful tissue organization through distinct strategies. For resource-constrained applications, PCA offers biologically meaningful compression with significantly reduced computational overhead. AEs should be preferred when preserving structural details is essential. Future work could explore different clustering techniques to better understand how each method’s feature space relates to specific biological questions.
Background and Rationale: The hippocampus is a critical structure for memory, cognition, and emotional regulation, and its dysfunction is consistently implicated in major psychiatric disorders. Global hippocampal volume reduction is among the most robust neuroimaging findings in schizophrenia and bipolar disorder. However, conventional volumetric measures are inherently coarse: they collapse across distinct subfields and along the anterior–posterior axis, masking focal effects and preventing integration with molecular and functional data. This limited granularity hinders the development of mechanistic models of hippocampal pathology. To overcome these limitations, surface-based morphometry and multiscale contextualization are needed to capture the fine-grained topography of hippocampal alterations and to link them with underlying biology and function.
Methods: We will analyze large-scale 3T MRI data from individuals with schizophrenia, bipolar disorder, and matched healthy controls (e.g., the B-SNIP cohort, HCP-EP). Using HippUnfold (https://github.com/khanlab/hippunfold), we will reconstruct hippocampal surfaces to quantify local thickness and gyrification across the hippocampal mantle. Case–control statistical comparisons will then be used to generate spatially detailed maps of structural alterations in schizophrenia and bipolar disorder. These alteration maps will capture subfield-specific and long-axis gradients of pathology, extending beyond global volumetric measures. To contextualize the findings, we will leverage Hippomaps (https://hippomaps.readthedocs.io) to systematically compare shared and distinct multiscale associations of surface-based hippocampal alterations in schizophrenia and bipolar disorder.
Expected Results: We anticipate that surface-based analyses will reveal fine-grained alterations in hippocampal thickness and gyrification in schizophrenia and bipolar disorder that are not detectable with volume-based metrics. These patterns are expected to show spatial heterogeneity along the anterior–posterior axis and within subfields, reflecting distinct neurodevelopmental and pathophysiological mechanisms. We further hypothesize that this spatial heterogeneity will relate to cognitive symptom profiles, such that bipolar patients with more pronounced cognitive impairment resembling schizophrenia will also show more schizophrenia-like hippocampal alterations. Multiscale contextualization with Hippomaps is expected to demonstrate that structural alterations preferentially align with molecular and histological gradients, such as excitatory–inhibitory balance, synaptic density, or laminar differentiation. Furthermore, we expect altered hippocampal structure to map onto functional disruptions observed in fMRI and EEG, such as aberrant hippocampal–prefrontal connectivity and dysregulated oscillatory activity. Together, these results will provide mechanistic insights into the molecular and circuit-level underpinnings of hippocampal vulnerability in psychosis-spectrum disorders.
Conclusion: This project will deliver the first systematic surface-based characterization of hippocampal morphology in schizophrenia and bipolar disorder, integrated with histological, genetic, and functional data. By moving beyond coarse volumetric reductions, we aim to identify anatomically precise and biologically informed signatures of hippocampal pathology. This multiscale approach has the potential to refine models of disease progression, inform hypotheses about cellular and molecular mechanisms, and generate novel targets for translational research. Ultimately, these findings may help bridge the gap between structural imaging and neurobiological mechanisms, advancing our understanding of hippocampal dysfunction in psychosis-spectrum disorders.
Introduction
Despite representing only ~2% of body mass, the human brain consumes about 20% of the body’s total energy (Raichle 2006). Most of this energy fuels spontaneous activity at rest and is produced through oxidative phosphorylation (OxPhos), powered by mitochondria. A recently developed voxelwise atlas of mitochondrial respiratory capacity offers an unprecedented view of this key bioenergetic function (Mosharov et al. 2025). Yet, how this spatial distribution of mitochondrial features aligns with the brain’s functional and metabolic network organization remains unknown. Here, we test whether mitochondrial phenotypes are structured according to intrinsic network architecture.
Methods
We analyzed all gray matter voxels from a biochemically profiled human brain slab (n = 249; figure panel a), each characterized by six mitochondrial features: enzymatic activities of CI, CII, and CIV; mitochondrial density (MitoD); tissue respiratory capacity (TRC); and mitochondrial respiratory capacity (MRC). High-resolution 7T resting-state fMRI (n = 58 subjects) and dynamic [18F]FDG PET (n = 20 subjects) data were coregistered to the slab’s stereotaxic space to derive voxelwise Functional Connectivity (FC) and Metabolic Connectivity (MC) matrices (figure panel b). For each OxPhos feature, we first computed its correlation across nodes with the FC- or MC-weighted mean in each node’s network neighbors (figure panel c). Significant correlations would suggest network-driven effects in regional variability of mitochondrial features. We then constructed a Mitochondrial Profile Similarity (MPS) matrix by computing Pearson’s correlation between the mitochondrial feature profiles of each voxel pair. We applied Louvain community detection to both FC and MC (γ = 0.8–2.0, step 0.1) to identify network modules, and tested whether MPS values were greater within modules than between them (ΔMPS = average MPS within − average MPS between modules, figure panels d–i). All analyses accounted for spatial autocorrelation (SA) and network geometry using SA-preserving and degree- and edge-length-preserving null models.
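A compact sketch of the similarity-and-modularity logic, on synthetic data and without the spatial-autocorrelation and degree-preserving null models used in the actual analysis, is given below: it builds a mitochondrial profile similarity matrix, detects Louvain communities on a weighted connectivity graph with networkx, and computes ΔMPS as within- minus between-module similarity.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(6)
n = 100

# synthetic data: 6 mitochondrial features per node, and a connectivity matrix
features = rng.random((n, 6))                         # CI, CII, CIV, MitoD, TRC, MRC
conn = np.abs(np.corrcoef(rng.random((n, 50))))       # stand-in FC/MC matrix
np.fill_diagonal(conn, 0)

# Mitochondrial Profile Similarity: correlation between node feature profiles
mps = np.corrcoef(features)

# Louvain community detection on the weighted connectivity graph
G = nx.from_numpy_array(conn)
communities = nx.community.louvain_communities(G, resolution=1.0, seed=0)
labels = np.empty(n, dtype=int)
for k, com in enumerate(communities):
    labels[list(com)] = k

# delta-MPS: average similarity within modules minus between modules
same = labels[:, None] == labels[None, :]
off_diag = ~np.eye(n, dtype=bool)
delta_mps = mps[same & off_diag].mean() - mps[~same].mean()
print(f"delta-MPS = {delta_mps:.3f}")
```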
Results
Across nodes, all mitochondrial features were significantly associated with the MC-weighted neighborhood averages (all p < 0.01 vs. both null models). For FC, significant effects were found for CI, CIV, TRC, and MRC (all p < 0.05 vs. both null models), suggesting that mitochondrial features, particularly oxidative capacity, are shaped by network-level interactions, with stronger effects for MC than FC. In the modularity analysis, FC showed higher within-modules than between-modules MPS across most γ values (p < 0.001), with effects strengthening at higher γ and pointing to increased mitotype coherence at finer community resolutions. For MC, significant effects were observed only at the finest resolutions, peaking at γ = 1.6 (p = 0.006), suggesting a higher degree of scale specificity.
Discussion
Our findings show that mitochondrial specialization is closely embedded within the brain’s functional and metabolic networks. Connectivity modules appear to act as bioenergetic niches, harboring distinct mitotypes. Regional differences in oxidative capacity are influenced by network-level interactions, suggesting that mitochondrial organization is not merely a local property but also reflects systems-level constraints. Together, these results provide a mechanistic link between intrinsic connectivity and energy metabolism, offering a framework for understanding how mitochondrial phenotypes support human brain function.
Background
Impulsivity is a multifaceted trait that emerges in childhood and is linked to several psychiatric disorders, such as attention-deficit/hyperactivity disorder and substance use disorders. Recent genomic and neuroimaging studies have identified genetic loci and brain systems associated with impulsivity and risk-taking behaviors. However, how these genetic underpinnings overlap across different facets of impulsivity and risk-taking, and how they are associated with brain morphology during early development remain unknown.
Methods
We applied genomic structural equation modeling to 17 impulsivity and risk-taking GWAS datasets to explore the overlapping genetic architecture underlying these phenotypes. We then computed polygenic scores (PGSs) for each genetic latent factor in 4,142 participants from the Adolescent Brain Cognitive Development Study and examined how each factor was associated with the brain structure during early development. We further tested whether socioeconomic status modulated the association between the PGSs and brain structures.
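The polygenic-score step reduces to a weighted sum once per-variant effect weights are available: each participant's PGS is the sum of effect-allele dosages weighted by the latent-factor GWAS effect sizes. A toy sketch with made-up dosages and weights:

```python
import numpy as np

rng = np.random.default_rng(7)

n_participants, n_variants = 1000, 5000
# 0/1/2 copies of the effect allele per variant (made-up genotypes)
dosages = rng.integers(0, 3, size=(n_participants, n_variants)).astype(float)
# per-variant effect sizes, e.g. from a latent-factor GWAS (made-up values)
weights = rng.normal(0, 0.01, size=n_variants)

# polygenic score: weighted sum of dosages, standardized across participants
pgs = dosages @ weights
pgs = (pgs - pgs.mean()) / pgs.std()
print(pgs[:5])
```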
Results
We identified three genetic latent factors, which we labeled lack of self-control, reward drive, and sensation seeking. These showed distinct associations with brain structure in late childhood and early adolescence. Specifically, the lack of self-control PGS was related to reduced cortical thickness and surface area, while the reward drive PGS was associated with heightened cellularity in subcortical structures. Finally, the sensation seeking PGS showed a positive association with cortical surface area and higher white matter integrity. Interaction analysis revealed that the association between the lack of self-control PGS and white matter mean diffusivity was modulated by socioeconomic status.
Conclusions
Our findings revealed that genetic predisposition for impulsivity and risk-taking is associated with morphological brain differences as early as ages 9–10. We also highlighted the importance of capturing the multidimensional nature of these traits to better understand their neurodevelopmental basis.
The BigBrain dataset represents the first ultrahigh-resolution 3D model of the human brain at 20 µm isotropic resolution, reconstructed from 7,404 histological sections of a human post-mortem brain. This unique dataset provides the basis for cytoarchitectonic mapping at a level of anatomical detail that bridges microscopic cellular organization with macroscale brain imaging. Traditionally, cytoarchitectonic areas have been delineated on individual histological sections, resulting in 2D maps that are difficult to integrate into 3D brain reference spaces.
To address this, we applied the AtLaSUi tool to reconstruct delineated BigBrain areas in full 3D. This workflow transforms manual 2D annotations into volumetric, topologically consistent maps that preserve the fine-grained borders of cortical regions. The resulting 3D maps enable spatially continuous visualization of cortical areas and facilitate direct comparison with structural and functional neuroimaging data.
The reconstructed areas are part of the Julich-Brain Atlas, a continuously expanding cytoarchitectonic atlas of the human brain. All maps are openly available through the EBRAINS research infrastructure and can be explored, accessed, and programmatically queried via the siibra tool suite. By making these maps accessible in a standardized 3D reference space, we contribute to the integration of microstructural data with multimodal neuroimaging and to the advancement of open, reproducible neuroscience.
Cortical thinning is associated with pruning, neuroplasticity, and cognitive decline. While age is a crucial predictor of thinning, it does not account for all of its variability. We developed models of cortical thinning based on temporal and spatial variables, including age, cortical type, lobes, brain structures, curvature, and laminar architecture. We utilized MRI scans from 871 participants without neurological history to estimate annual cortical thinning across the lifespan, alongside laminar architecture profiles from the BigBrain dataset. Subsequently, we employed a Gradient Tree Boosting algorithm to predict thinning using three different feature sets: temporal, spatial, and temporal-spatial. The temporal model (age as the only variable) achieved an r-squared of 0.79, the spatial model (all variables except age) a score of 0.58, and the temporal-spatial model 0.84. Using Shapley additive explanations on the temporal-spatial model, we examined the contribution of each variable to cortical thinning and their interactions. Age was the feature that contributed most to cortical thinning, followed by layer I thickness, cortical thickness at 10 years of age, and layer IV thickness. Our examination suggests that regions that experience more thinning during development tend to undergo less thinning during aging, and this correlation is linked to layer I thickness.
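The modelling approach can be sketched with scikit-learn and the shap package on synthetic data; the feature names and values below are illustrative placeholders, not the study's regional thinning estimates, and shap's exact API may vary slightly between versions.

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(8)

# synthetic stand-in features: [age, layer I thickness, thickness at age 10, layer IV thickness]
X = rng.random((500, 4))
y = -0.03 * X[:, 0] + 0.01 * X[:, 1] + 0.005 * rng.standard_normal(500)  # toy "annual thinning"

model = GradientBoostingRegressor(random_state=0).fit(X, y)
print("R^2 on training data:", round(model.score(X, y), 3))

# Shapley additive explanations for per-feature contributions
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print("mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0).round(4))
```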
INTRODUCTION.
The superficial white matter (SWM) immediately beneath the cortical mantle harbors short-range U-fibers that interconnect adjacent cortical regions, thereby supporting local information integration. Due to the key role of U-fibers in brain plasticity and aging, alterations in their density are observed in various disorders[1,2,3]. Despite its importance, the SWM has been relatively understudied compared to deep white matter, in part due to its thin, heterogeneous structure and the challenges it poses for in-vivo imaging. Recent advances in high-resolution histology and surface-based analysis, such as those enabled by ultra-high field 7T magnetic resonance imaging (MRI) and the BigBrain 3D histology dataset, have provided unprecedented opportunities to characterize SWM organization at multiple depths from the cortical surface[4]. Nevertheless, reliably separating and quantifying short-range U-fibers from long-range fibers within the SWM remains technically challenging. Building on these advancements, we adapted a framework[5] to separate and quantify short-range U-fibers and long-range fibers within the human SWM and examined how their distribution is shaped by cortical geometry.
METHOD.
This study utilized MRI data acquired at the Montreal Neurological Institute using a Siemens Terra system with both 3T and 7T scanners. The dataset comprised eleven healthy participants (4 females) with a mean age of 30.1±5.52 years. The following scans were collected: (i) 7T T1-weighted (T1w) images and (ii) 3T diffusion-weighted images. All MRI data were preprocessed using the micapipe pipeline[6]. We separated and quantified U-fibers and long-range fibers in the SWM by first solving the Laplacian equation across the white matter and shifting an existing surface along the resulting gradient using T1w-images. Using this Laplacian field, we computed streamlines extending from the white matter surface toward the subcortical regions and subsequently warped these streamlines into the diffusion MRI space. We then extracted apparent fiber density[7] values from fixels derived via multi-shell, multi-tissue fiber orientation distribution. Based on their orientation relative to the Laplacian streamlines, fibers aligned parallel were identified as radial fibers connecting the cortex to deep white matter (long-range fibers), while those perpendicular were classified as tangential to the cortex (i.e., putative U-fibers).
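The Laplacian-field step can be illustrated on a toy 2D domain: fix boundary values at the gray/white border and at the deep boundary, relax Laplace's equation in between by iterative averaging, and take the gradient of the solution as the local streamline direction. The actual analysis solves this in 3D over the white-matter volume; the grid and masks below are made up.

```python
import numpy as np

def solve_laplace(domain, wm_surface, deep_boundary, n_iter=2000):
    """Jacobi relaxation of Laplace's equation on a 2D grid.

    domain        : boolean mask of the white-matter region to solve over
    wm_surface    : boolean mask held at potential 0 (gray/white border)
    deep_boundary : boolean mask held at potential 1 (deep white matter)
    """
    phi = np.zeros(domain.shape)
    phi[deep_boundary] = 1.0
    interior = domain & ~wm_surface & ~deep_boundary
    for _ in range(n_iter):
        neighbours = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                      np.roll(phi, 1, 1) + np.roll(phi, -1, 1)) / 4.0
        phi[interior] = neighbours[interior]      # relax only interior voxels
    return phi

# toy slab: gray/white border at the top row, deep boundary at the bottom row
shape = (40, 40)
domain = np.ones(shape, dtype=bool)
wm_surface = np.zeros(shape, dtype=bool); wm_surface[0, :] = True
deep = np.zeros(shape, dtype=bool); deep[-1, :] = True

phi = solve_laplace(domain, wm_surface, deep)
gy, gx = np.gradient(phi)                          # streamline directions follow this gradient
print(np.round(phi[::10, 20], 2))                  # potential increases smoothly with depth
```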
RESULTS.
The fiber density maps, representing the density of the corresponding fiber populations (at 2 mm underneath the gray/white matter border), are shown in Fig. 1A. To evaluate the influence of brain geometry on SWM fiber density, we examined the correlation between each fiber density map and local cortical curvature (Fig. 1B). Long-range fibers showed a mild correlation with curvature, which was not statistically significant within sulcal surfaces. In contrast, U-fibers exhibited a moderate correlation with curvature, with significant associations observed in both gyral and sulcal regions.
DISCUSSION.
Our study applied a framework for separating and quantifying U-fibers and long-range fibers within the SWM using combined 3T and 7T MRI. The findings on the influence of brain geometry on SWM fiber density are consistent with previous work[8] and suggest that cortical geometry may differentially constrain the organization of short- and long-range fibers, with this framework providing a promising tool for further refinement using histology or cytoarchitectonic data to elucidate SWM microstructure.
Objectives
Genetic risk factors of Parkinson's disease (PD) may influence disease susceptibility through various mechanisms throughout the lifespan. However, how different mechanisms contribute to PD development remains unclear. Here we aimed to investigate relationships between brain structure and PD and distinguish between neurodevelopmental and later-life mechanisms underlying genetic PD risk.
Methods
We performed two-sample Mendelian randomization (MR) to assess potentially causal relationships between brain morphometry and PD in the UK Biobank. We then divided PD risk genes into those involved in mitochondrial, lysosomal, and autophagy functions and compared their developmental expression trajectories to those of all other PD risk genes using the BrainSpan dataset. In addition, we performed gene-set analyses using multimarker analysis of genomic annotation (MAGMA) to characterize the underlying biological processes of the remaining PD risk variants.
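The core two-sample MR estimate can be written compactly: given per-variant effects of each instrument on the exposure (beta_X) and on the outcome (beta_Y, with standard error se_Y), the inverse-variance-weighted causal estimate is a weighted average of the per-variant Wald ratios. A minimal sketch with made-up summary statistics (not the UK Biobank data):

```python
import numpy as np

rng = np.random.default_rng(9)

# made-up GWAS summary statistics for 50 independent instruments
beta_x = rng.normal(0.08, 0.02, 50)            # SNP -> brain-structure effects
se_y = rng.uniform(0.01, 0.03, 50)
beta_y = 0.3 * beta_x + rng.normal(0, se_y)    # SNP -> PD effects (true causal effect 0.3)

# inverse-variance-weighted (IVW) two-sample MR estimate
ratio = beta_y / beta_x                        # per-variant Wald ratios
weights = beta_x**2 / se_y**2                  # inverse variance of each ratio
ivw = np.sum(weights * ratio) / np.sum(weights)
ivw_se = 1.0 / np.sqrt(np.sum(weights))
print(f"IVW estimate = {ivw:.3f} +/- {ivw_se:.3f}")
```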
Results
MR revealed potentially causal positive associations between larger cortical surface area and subcortical volumes and increased PD risk. Additionally, mitochondrial, lysosomal, and autophagy pathway genes showed lower fetal expression, which increased after birth, while other genes maintained stable expression throughout development. Gene-set analysis further revealed that while pathway-specific variants were enriched for mechanisms such as oxidative stress responses, the remaining variants were enriched for neurodevelopmental processes including neural progenitor cell division, microtubule organization, and synaptic development.
Conclusions
Our findings demonstrate that genetic predisposition toward larger brain structures may causally increase susceptibility to PD. Expression patterns suggest PD genetic risk operates through distinct mechanisms whereby some genetic variants increase PD risk by influencing early neurodevelopmental processes determining brain structure, while others contribute through later-life cellular dysfunctions. These results provide novel insights into how genetic risk factors shape PD vulnerability across the lifespan, implicating both developmental brain architecture and adult cellular maintenance in disease susceptibility.
p-HCP (Prenatal Human Connectome Patterns) is a high-resolution multimodal MRI dataset of human fetal brain development spanning the second half of gestation. Acquired ex vivo at ultra-high field strength (11.7 T), this dataset includes whole-hemisphere images at 100–200 μm isotropic resolution: anatomical scans, quantitative relaxometry maps, and high-angular-resolution diffusion imaging across multiple b-values. By offering whole-brain, isotropic, multimodal images at an unprecedented resolution, p-HCP will enable new insights into the spatiotemporal dynamics of prenatal brain development. It represents a valuable resource for building mesoscopic brain atlases and advancing our understanding of human neurodevelopment at a stage previously inaccessible to such detailed explorations.
The second half of gestation is a vital period of neurodevelopment involving processes such as neuronal migration, synaptogenesis, and axonal growth. Capturing these transient events in 3D across the whole brain has remained a challenge. While histological methods allow detailed microstructural insight, they typically lack whole-brain volumetric coverage. Conversely, in utero MRI has enabled full-brain 3D imaging but remains limited to macroscopic resolution (~1 mm) due to motion and safety limitations.
This dataset addresses a critical gap in developmental neuroimaging by capturing fetal brain development at a mesoscopic scale, which should help bridge the gap between in vivo fetal imaging and histological microscopy. Our approach leverages ex vivo MRI over extended scan times to acquire comprehensive multimodal 3D imaging at high resolution, allowing detailed visualization and quantitative assessment of developing brain structures. The few existing fetal studies that used ultra-high-field MRI have focused on the first and early second trimesters of gestation, because older brains are typically too large for small-bore preclinical scanners. To overcome this size limitation, we adapted a blockwise acquisition and digital reconstruction method previously pioneered for an adult brain (the Chenonceau dataset). The brains were sectioned into blocks, each block was imaged separately in a small-bore 11.7-tesla scanner, and a dedicated semi-automatic image registration and data fusion pipeline was used to reconstruct whole-hemisphere images.
The acquired imaging modalities provide rich information on the tissue composition and microstructure: with quantitative relaxometry, $\text{T}_2^*$ is sensitive to iron content and myelin, $\text{T}_1$ reflects macromolecular environments, and $\text{T}_2$ informs on water content and microstructure. Multi-shell diffusion imaging (b = 1500, 4500, 8000 s/mm²) and high angular resolution (90 directions at b = 8000 s/mm²) will enable detailed tractography and multi-compartment diffusion models. The quantitative multimodality gives access to microstructural modelling, paving the way for a better understanding of the brain tissue composition and neurodevelopmental dynamics occurring during the last two trimesters of gestation.
The initial data release available on the EBRAINS platform features three brains at 18, 27, and 31 post-conceptional weeks, with a complete set of anatomical and relaxometry data, and metrics of the diffusion tensor imaging (DTI) model. Future data releases will include additional specimens covering the developmental timeline, multi-shell models of the diffusion signal, tractography, correlative histological data from the same brains, and segmentations of brain structures, turning this dataset into a full-featured developmental atlas.
The human prefrontal cortex (PFC) plays a central role in cognitive control and emotion regulation, supported by a protracted and heterogeneous maturation across its subregions (e.g., dorsolateral vs. ventromedial, medial vs. lateral) [1, 2]. This intra-PFC variability is a hallmark of its functional specialization [3, 4]. Importantly, the developmental trajectory of the PFC is shaped not only by intrinsic cortical mechanisms but also through dynamic interactions with subcortical structures such as amygdala, thalamus, and basal ganglia [5]. Yet, most current models of brain development remain cortico-centric, frequently overlooking how the timing and strength of subcortical inputs influence the specialization of different PFC subregions. To address this gap, we propose a novel framework for quantifying cortico-subcortical co-maturation using the Wasserstein distance, a metric from optimal transport theory [6]. This approach is designed to measure divergence between normative developmental trajectories of cortical and subcortical regions, enabling a more precise mapping of how subcortical inputs scaffold PFC maturation. We will apply this framework to high-resolution morphometric and diffusion MRI data from large-scale open-access datasets, including the Human Connectome Project – Youth and Young Adult cohorts [7, 8]. Analyses will examine whether mismatches in maturation patterns differentiate PFC subregions. Methodologically, our approach will be compared to existing tools such as MIND (Morphometric INverse Divergence) to assess robustness [9]. Crucially, we will validate anatomical plausibility against the histological BigBrain dataset, focusing on laminar thickness, cytoarchitectural boundaries, and regional differentiation within the PFC [10]. By anchoring MRI-based maturation measures to cellular-scale histology, we aim to ensure that our framework captures biologically meaningful patterns of development. Conceptually, this project contributes a methodological foundation for modeling cortico-subcortical co-development at high spatial resolution. Beyond advancing basic neuroscience, it may provide insights into sensitive developmental periods and mechanisms of neuropsychiatric vulnerability.
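As a simple illustration of the proposed metric (a sketch only; the trajectories, regions, and normalization choice below are hypothetical, not HCP results), the 1-D Wasserstein distance can be computed between two normative developmental trajectories once each is expressed as a distribution of change over age:

```python
# Illustrative sketch: divergence between two normative developmental
# trajectories (e.g., a PFC subregion vs. a subcortical seed) via the
# 1-D Wasserstein distance. All values are hypothetical.
import numpy as np
from scipy.stats import wasserstein_distance

ages = np.linspace(8, 22, 60)                      # years, hypothetical sampling
traj_dlpfc = 2.8 - 0.04 * (ages - 8)               # e.g. cortical thickness (mm)
traj_thalamus = 7.9 - 0.01 * (ages - 8)            # e.g. normalized volume

def to_density(y):
    """Normalize a trajectory to a distribution of change over age, so the
    metric compares *when* change is concentrated rather than absolute units."""
    rate = np.abs(np.gradient(y))
    return rate / rate.sum()

w = wasserstein_distance(ages, ages,
                         u_weights=to_density(traj_dlpfc),
                         v_weights=to_density(traj_thalamus))
print(f"Co-maturation divergence (Wasserstein): {w:.3f} years")
```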
References:
1. Kolk, S. M. & Rakic, P. Development of prefrontal cortex. Neuropsychopharmacology 47, 41–57 (2022).
2. Sydnor, V. J. et al. Heterochronous laminar maturation in the human prefrontal cortex. Preprint at https://doi.org/10.1101/2025.01.30.635751 (2025).
3. Deco, G. et al. One ring to rule them all: The unifying role of prefrontal cortex in steering task-related brain dynamics. Prog. Neurobiol. 227, 102468 (2023).
4. Kringelbach, M. L. & Deco, G. Prefrontal cortex drives the flexibility of whole-brain orchestration of cognition. Curr. Opin. Behav. Sci. 57, 101394 (2024).
5. Chin, R., Chang, S. W. C. & Holmes, A. J. Beyond cortex: The evolution of the human brain. Psychol. Rev. 130, 285–307 (2023).
6. Panaretos, V. M. & Zemel, Y. Statistical Aspects of Wasserstein Distances. Annu. Rev. Stat. Appl. 6, 405–431 (2019).
7. Van Essen et al. The Human Connectome Project: A data acquisition perspective. NeuroImage 62, 2222–2231 (2012).
8. Somerville et al. The Lifespan Human Connectome Project in Development: A large-scale study of brain connectivity development in 5–21 year olds. NeuroImage 183, 456–468 (2018).
9. Sebenius, I. et al. Robust estimation of cortical similarity networks from brain MRI. Nat. Neurosci. 26, 1461–1471 (2023).
10. Amunts, K. et al. BigBrain: An Ultrahigh-Resolution 3D Human Brain Model. Science 340, 1472–1475 (2013).
We present a scalable computational framework for simulating brain dynamics within structurally complex regions such as the hippocampus, integrating high-resolution multimodal data—including BigBrain-derived surface meshes and diffusion MRI tractography—into The Virtual Brain simulator. This Region Brain Network Model (RBNM) framework enables vertex-level placement of neural mass models (NMMs) informed by region-specific connectivity, anatomical subfields, and morphological descriptors. Our layered architecture supports biologically grounded simulations of EEG, MEG, and BOLD signals, validated against empirical recordings across frequency bands. Compared to standard whole-brain simulators, RBNM enhances anatomical precision, simulation fidelity, and regional interpretability. We benchmark the framework using hippocampal data and achieve high spectral correlation with intracranial EEG (iEEG). The code and data pipelines are openly available to support reproducibility and adoption in the neuroscience community.
Introduction
Segmenting cellular structures in electron microscopy (EM) images is crucial for studying the morphology of neurons and glial cells in both healthy and diseased brain tissues. Traditionally, this task has relied on manual annotation, requiring neuroanatomy experts to examine images slice by slice and delineate each structure individually. However, manual annotation is highly labor-intensive and time-consuming. To address this, deep learning methods such as convolutional neural networks (CNNs) have been adopted for EM image segmentation. Nevertheless, CNNs are often criticized for their reliance on local feature extraction, which limits their ability to capture long-range dependencies and global context within an image. Recently, transformer-based models with self-attention mechanisms have emerged as powerful alternatives, enabling more effective integration of both local and global information. Building on this progress, the Segment Anything Model (SAM) has shown strong performance in natural image segmentation. In this work, we investigate the application of SAM to microscopy images and assess its potential to enhance segmentation accuracy and efficiency in neuroanatomical studies.
Materials & Methods
This study builds on two pioneering approaches: the Segment Anything Model (SAM) and Segment Anything for Microscopy (Micro-SAM). We fine-tuned and evaluated the Micro-SAM models on in-house serial block-face scanning electron microscopy datasets with a cutting interval of 40 nm. Dataset A, from the CA1 region of the hippocampus of a healthy rat, contained 1044 slices and was used as a control to characterize normal dendritic structures. Dataset B, with 698 slices, was collected from the same region after pilocarpine-induced status epilepticus in a rat. Dataset C, comprising 697 slices, was derived from a cortical layer II biopsy of a patient with idiopathic normal pressure hydrocephalus obtained during shunt surgery. The pixel size was 15 × 15 nm² for datasets A and B, and 10 × 10 nm² for dataset C. Model training and internal evaluation were performed on dataset A, while external evaluation was conducted on datasets B and C. After training, model performance was assessed using the object-level error metric [1].
Results
By deriving prompts from ground-truth masks and using the mask quality of bounding-box prompts as a benchmark, our fine-tuned ViT-B and ViT-L models exhibited approximately 19.1% and 20.8% improvements over the original SAM, and 150.1% and 181.9% improvements over the Micro-SAM models, respectively. This gap reflects the fact that the two Micro-SAM models used in this study were trained for organelles. In addition, a user study indicated that our fine-tuned model enables human annotators to generate segmentations more consistent with the ground truth while requiring less annotation time.
Discussion
Our previous study showed that automatic inference produced inferior results compared to interactive prompting, largely due to the limitations of grid-point-based auto-prompting. To overcome this, we integrated the object detection model You Only Look Once (YOLO), which generates bounding-box prompts as inputs. This strategy compensates for the weaker results of automatic inference, while also enabling the generation of additional pseudo-masks to enhance performance in subsequent training stages. In the next stage, we aim to generate more masks and construct 3-dimensional dendrite connectome maps.
Naturalistic stimuli, such as movie-watching, amplify behaviorally relevant brain networks and sensory-driven cortical hierarchies (1). However, the large-scale organizational principles that govern how functional connectivity (FC) reconfigures in such contexts remain incompletely understood. Here, we leveraged ultra-high-field (7T) fMRI data from the Human Connectome Project (HCP) (2) and the Precision NeuroImaging (PNI) datasets (3) to investigate how FC changes between rest and movie-watching, and how these differences relate to cortical hierarchy and geodesic distance (GD).
We first analyzed resting-state and movie-watching 7T fMRI data from 93 healthy adults in the HCP dataset. Using the Glasser 360-region atlas, whole-cortex FC matrices were computed for each condition (Figure 1A), and state-dependent differences were quantified using the movie–rest difference (MRD) for each cortical parcel. We selected three representative regions of interest (ROIs) spanning major cortical networks, including V1 (visual), MIP (dorsal attention), and 31pv (default mode). For each ROI, we assessed how MRD varied across the cortex using a linear mixed model and examined its relationship to cortical distance and functional hierarchy (4, 5).
Spearman correlation analysis revealed high similarity in FC between rest and movie states for the ROIs V1 (rho = 0.77, p_spin < 0.001), MIP (rho = 0.89, p_spin < 0.001), and 31pv (rho = 0.86, p_spin < 0.001). However, MRD showed state-dependent variation that was modulated by the hierarchical level of the seed ROI (Figure 1B). In unimodal regions such as V1 and MIP, MRD decreased with geodesic distance (slope_V1 = –0.06, slope_MIP = –0.034; Figure 1C), indicating stronger reconfiguration in nearby areas. In contrast, transmodal ROIs such as 31pv in the default mode network showed increasing MRD with distance (slope_31pv = 0.045), suggesting broader, long-range reorganization. These spatial associations persisted after controlling for differences in FC amplitude between states. Overall, the slope of the MRD–GD relationship scaled with hierarchical position, with more transmodal regions exhibiting flatter or positive slopes and more sensory-like regions showing steeper negative gradients. In addition, we examined whether MRD patterns reflected underlying cytoarchitectural features using the principal gradient of the BigBrain histological atlas (6, 7). We observed modest correlations between MRD and the BigBrain principal gradient in unimodal regions (V1: rho = –0.36, p_spin < 0.001; MIP: rho = –0.23, p_spin = 0.011), but no significant association in transmodal areas (31pv), suggesting structure–function coupling is strongest at lower hierarchical levels. Finally, we conducted a replication analysis using data from the PNI dataset (3) and observed consistent results.
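For readers wishing to follow the logic of the seed-based analysis, the sketch below outlines the movie–rest difference (MRD) computation and its distance dependence on synthetic data (all arrays are random placeholders, and the spin-permutation test used in the study is omitted):

```python
# Schematic recomputation of a seed-based MRD and its MRD-GD slope on
# placeholder data (not the HCP/PNI datasets).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_parcels = 360                                     # e.g. Glasser atlas size
fc_rest = rng.normal(size=(n_parcels, n_parcels))
fc_rest = (fc_rest + fc_rest.T) / 2                 # symmetric placeholder FC
fc_movie = fc_rest + rng.normal(scale=0.1, size=fc_rest.shape)
fc_movie = (fc_movie + fc_movie.T) / 2
gd_from_seed = rng.uniform(5, 180, size=n_parcels)  # geodesic distance (mm), placeholder

seed = 0                                            # e.g. V1
mrd = fc_movie[seed] - fc_rest[seed]                # movie-rest difference per parcel

rho, _ = spearmanr(fc_rest[seed], fc_movie[seed])   # rest/movie similarity of the seed row
slope = np.polyfit(gd_from_seed, mrd, 1)[0]         # linear MRD-GD slope (cf. slope_V1)
print(f"rest-movie similarity rho = {rho:.2f}, MRD-GD slope = {slope:.3f}")
```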
Together, these findings highlight that naturalistic stimulation induces spatially structured and hierarchy-dependent reconfiguration of functional networks. FC changes are most tightly constrained by geodesic and cytoarchitectural gradients in unimodal cortex, but become increasingly distributed and decoupled in higher-order regions (8, 9). Our results support emerging models of large-scale cortical organization and underscore the importance of spatial and hierarchical context in interpreting dynamic functional connectivity.
The hippocampal formation (HF) plays a pivotal role in different aspects of memory, with its subdivisions having various functional implications. The hippocampus has been parcellated in different ways in both histological and MRI studies [1, 2]. In the BigBrain, a 3D rendering of the hippocampus was performed, with its subdivisions revealed through unfolding and unsupervised clustering of laminar and morphological features [3]. However, this parcellation was not sufficiently detailed, for example in the subicular complex (subiculum).
We cytoarchitectonically identified and mapped 12 structures of the HF in 10 postmortem brains, generating probabilistic maps of CA1, CA2, CA3, CA4, Fascia dentata (FD), prosubiculum (ProS), subiculum (Sub), presubiculum (PreS), parasubiculum (PaS), transsubiculum (TrS), the hippocampal-amygdaloid transition area (HATA), and the entorhinal cortex (EC) [4]. Building on this research, we generated 3D maps of the HF in the BigBrain template to study the extent, topography, and neighborhood relationships of these structures.
Cytoarchitectonic mapping of the 12 structures was performed on at least every 15th serial histological section of the BigBrain using the web-based annotation tool MicroDraw at 1-micron in-plane resolution. Subsequently, a deep learning workflow was applied to 3D-reconstruct the structures: convolutional neural networks were used for image segmentation in the sections lying between those manually mapped [5]. The annotations of each structure were non-linearly transformed to the sections of the 3D-reconstructed BigBrain space at 20-micron isotropic resolution [6] and further visualized using ATLaSUI.
We identified 12 cytoarchitectonic structures of the HF in the BigBrain and analyzed their macroanatomy (Fig. 1). The fasciola cinerea (FD in its mediocaudal extension) was larger in the left hemisphere, while it was minuscule on the right (Fig. 1A). The left ProS extended onto the dorsomedial surface of the parahippocampal gyrus (PHG), while the right ProS barely appeared on the surface (Fig. 1B). Caudally, PreS occupied the medial surface of the PHG. TrS abutted PreS ventrally. Caudal TrS bordered the temporo-parieto-occipital proisocortex laterally (Fig. 1A), while rostral TrS abutted area 35. PaS replaced TrS rostrally. The detailed mapping of the HF reflected a transition from the allocortex (ProS and Sub) to the periallocortex (PreS, PaS) within the subicular complex, which has traditionally been considered a single cytoarchitectonic unit. Rostrally, both hemispheres showed three Digitationes hippocampi (Fig. 1C).
The high-resolution (20 μm) whole-brain histological references of the HF were generated on the basis of the BigBrain. They will be publicly available on the EBRAINS platform and integrated with the BigBrain model. The HF maps extend those of the piriform cortex in the BigBrain, the two regions being hubs of the limbic system [7]. The references can support high-resolution MR imaging and serve as a basis for brain simulation and data integration.
Background. Recent neuroimaging research has witnessed a surge of studies aiming to shed light on principles of structure-function coupling in the human brain. Central to these investigations is understanding how relatively fixed anatomical features contribute to highly dynamic patterns of activity (1). This question remains particularly elusive in transmodal areas supporting higher-order cognitive functions such as memory, executive function, and attention (2, 3). Among these regions, the salience network (SN) has been proposed to mediate switching between functional states (as proposed in the triple network model (4) and large-scale signal propagation frameworks (5)), making it an ideal system-level model of dynamic structure-function coupling. Yet, the neuroanatomical features enabling the SN to regulate brain-wide transitions remain poorly understood.
Aims/hypotheses. We will develop a novel multiscale connectomics framework to investigate how fixed microarchitectural and connectivity features support flexible functional dynamics. Focusing on the SN and its role in regulating transitions between the default mode network (DMN) and central executive network (CEN), we hypothesize that the SN exhibits characteristic microarchitectural and connectomic features that allow it to mediate transitions between large "task-positive" and "task-negative" systems.
Methods. We will integrate state-of-the-art methodologies across multiple levels of brain organization. First, SN microarchitecture will be characterized using intracortical profiling of post-mortem histology, leveraging the BigBrain (6) and AHEAD (7) datasets. These findings will be extended in vivo using 7T (8) myelin-sensitive MRI to quantify microstructural variability within the SN across individuals and to evaluate the robustness of the histological findings. Our preliminary findings show that specific subregional patterns of local circuit properties within the SN support its flexible engagement with distributed functional systems. Second, we will comprehensively map SN connectivity by modeling white matter pathways with diffusion MRI tractography and local connections using surface geometry eigenmodes (9). Together, these approaches will determine whether the SN occupies a topologically advantageous position within the structural connectome, facilitating parallel interactions across systems. Third, we will investigate how the structural architecture of the SN shapes functional dynamics and supports network switching. We will simulate whole-brain state transitions using network control theory (10). Specifically, we will examine the capacity of SN nodes to facilitate transitions between empirically defined brain states, such as shifts from DMN to CEN states. This approach will test whether the SN's structural embedding and microarchitecture confer on it a unique capacity to coordinate large-scale functional reconfigurations.
Outcomes. Our work proposes a novel framework for understanding the joint contributions of microarchitecture, geometry, and connectivity to dynamic cross-network interactions in the human brain. It will provide a foundational account of how subregional patterning of the SN may contribute to its capacity to modulate the engagement of distinct macroscale functional systems. More broadly, our findings will demonstrate how local structural topography and connectivity shape and constrain large-scale neural dynamics, opening new areas of investigation into the structural basis of cognitive flexibility (11).
High-resolution 3D reconstructions not only provide insights into the geometric and topological properties of vascular networks but also enable quantitative analysis essential for disease diagnosis, surgical planning, and personalized therapeutic strategies. Within the framework of the BigBrain project, which seeks to generate ultra-high-resolution models of the human brain at 1 µm isotropic resolution, vascular reconstruction is particularly significant for refining anatomical context, supporting multimodal data integration, and advancing computational modeling of cerebrovascular function.
In this work, we present a robust methodology for reconstructing vascular structures and extracting their centerlines from a set of 100 consecutive 1 µm-thick vascular slice images, each with a resolution of 512×512 pixels and a spacing of one voxel along the z-axis. Our approach combines classical image processing techniques with modern data-driven modeling to address the challenges posed by noise, irregular boundaries, and inter-slice variability. First, each 2D slice is preprocessed through grayscale normalization, binarization, denoising, and morphological operations to identify the largest connected vascular region. The maximum inscribed circle within this region is then determined, providing an estimate of both the slice-specific radius and the geometric center. By stacking the geometric centers across slices, we generate a discrete trajectory of vascular centroids in 3D space.
To model the vascular centerline, we initially employ polynomial curve fitting across x–z and y–z coordinates. After evaluating multiple orders, a sixth-order polynomial is selected as the optimal balance between smoothness and accuracy, yielding coefficients of determination (R²) above 0.99 and mean absolute errors below 5 pixels. While polynomial fitting effectively captures the global trajectory, it exhibits sensitivity to local noise and oscillatory artifacts (Runge’s phenomenon). To overcome these limitations, we further apply a machine learning–based regression strategy using random forests. This approach preserves both local geometric detail and global smoothness, reducing fitting error by approximately 40% compared to polynomial models. The random forest–based method achieves R² values above 0.998 for both x- and y-coordinates, demonstrating superior robustness and denoising capability.
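The two fitting strategies can be contrasted on synthetic centroid data as in the sketch below (a toy in-sample comparison under assumed noise, not the evaluation reported above):

```python
# Sketch of the two centerline-fitting strategies: sixth-order polynomial fit
# vs. random-forest regression, on synthetic per-slice centroid coordinates
# (z = slice index, x = per-slice geometric center).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, r2_score

z = np.arange(100, dtype=float)                          # 100 slices, 1 voxel apart
x = 256 + 0.002 * (z - 50) ** 2 \
    + np.random.default_rng(1).normal(0, 1.5, z.size)    # noisy centroid x-coordinates

# (1) Sixth-order polynomial fit of x against z
coeffs = np.polyfit(z, x, deg=6)
x_poly = np.polyval(coeffs, z)

# (2) Random-forest regression on the same coordinates
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(z.reshape(-1, 1), x)
x_rf = rf.predict(z.reshape(-1, 1))

for name, pred in [("poly-6", x_poly), ("random forest", x_rf)]:
    print(f"{name}: MAE={mean_absolute_error(x, pred):.2f} px, "
          f"R2={r2_score(x, pred):.4f}")
```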
The reconstructed vascular radius averages approximately 30.1 pixels (≈30 µm in real scale), with only minor fluctuations across slices. Orthogonal projections of the 3D centerline onto XY, YZ, and ZX planes provide an intuitive visualization of vascular curvature and spatial continuity. Moreover, the proposed methodology enables the generation of realistic vascular tube models by extruding the fitted radius along the reconstructed centerline. These models enhance interpretability and facilitate integration into multimodal brain atlases such as BigBrain, where vascular context is indispensable for accurate localization and functional annotation.
Our contributions are threefold: (1) a mathematically principled framework for radius estimation and centerline modeling from serial 1 µm histological sections; (2) the introduction of machine learning–based regression to improve stability and robustness of centerline fitting; and (3) comprehensive quantitative evaluation using MAE, RMSE, and R² metrics, validating the accuracy and reproducibility of the reconstruction. Future work will extend this methodology to larger-scale vascular networks within the 1 µm BigBrain dataset and explore integration with multimodal imaging modalities such as MRI and micro-CT.
Introduction:
Among their many applications, interpolation methods have proven particularly valuable in neuroimaging for frame-rate enhancement, detailed reconstruction of 3D datasets, and replacing missing data. The latter problem becomes critical when dealing with ultra-high (cellular) resolution brain images. One such application concerns the BigBrain, a 3D reconstruction of 7404 histological brain sections at 20-micrometer isotropic resolution [1]. The original sections of the BigBrain were re-scanned at 1 micron in-plane, and images were captured through selected slices at different depths with an optical microscopy technique. To deal with the large data size, slices were divided into patches, which were aligned and stacked to form 6x6x6 mm^3 volumes at 1-micron isotropic resolution.
In this work, we present a novel deep learning-based method for near-duplicate image synthesis [2,3,4] with bi-directional Flows of Feature Pyramid (FFP) [4] and an Adaptive Feature Learning (AFL) [2] algorithm, designed to replace missing data and create seamless, smooth 3D blocks of the 1-micron isotropic BigBrain.
Methods:
Two 6x6x6 mm^3 blocks of the BigBrain were downloaded from EBRAINS (https://ebrains.eu/): one at 2-micron isotropic resolution (25 GB) and the other at 8-micron isotropic resolution, as a proof of concept. All code was run on two NVIDIA GeForce GTX 1080 GPUs with CUDA version 12.4.
We adopt a multi-scale feature extractor based on a feature pyramid architecture to accurately model motions of varying magnitudes, ranging from subtle movements to large-scale displacements. Built upon this, a scale-agnostic bi-directional motion estimator is employed to effectively handle both small and large movements.
To ensure visually coherent synthesis, we integrate Gram Loss, Gradient Loss, and Perceptual Loss into the optimization process. Gram Loss facilitates global texture preservation, Gradient Loss retains local edge details, and Perceptual Loss emphasizes textures and overall appearance.
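A hedged PyTorch sketch of the Gram and gradient terms is given below; the feature extractor, loss weights, and the perceptual backbone are left out and would need to be supplied (e.g., a pretrained network for the perceptual term).

```python
# Illustrative implementations of the Gram (texture) and gradient (edge) terms;
# not the exact loss configuration used in this work.
import torch
import torch.nn.functional as F

def gram_matrix(feat):
    """feat: (B, C, H, W) feature maps -> (B, C, C) normalized Gram matrices."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def gram_loss(feat_pred, feat_target):
    return F.mse_loss(gram_matrix(feat_pred), gram_matrix(feat_target))

def gradient_loss(img_pred, img_target):
    """Penalize differences of finite-difference image gradients (edge detail)."""
    dx = lambda t: t[..., :, 1:] - t[..., :, :-1]
    dy = lambda t: t[..., 1:, :] - t[..., :-1, :]
    return (F.l1_loss(dx(img_pred), dx(img_target)) +
            F.l1_loss(dy(img_pred), dy(img_target)))
```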
Additionally, adaptive loss functions are introduced to focus on high-frequency or critical regions, providing flexibility during optimization. This adaptability improves robustness and enhances generalization and performance across diverse scenarios.
Results:
The motion estimator leverages the feature pyramid to align frames by predicting motion vectors at multiple scales and corrects for any residual motion in the 3D blocks (Figure 1). The combination of losses enhances the quality and consistency of the interpolated frames, yielding smooth transitions between frames (Figure 2).
Most neuroimaging studies rely on group averages, yet individuals differ substantially in brain anatomy, function, and behavior. In clinical populations, where single-subject diagnostics and biologically informed stratification are essential, this variability cannot be ignored. My talk will highlight efforts in my lab to develop and apply individualized neuroimaging methods.
In the first part, I will focus on modeling fMRI signals and effective connectivity between brain regions. Physiologically informed generative models, such as dynamic causal modeling (P-DCM), enable inference of neuronal states and connectivity from BOLD signals. In Major Depressive Disorder, we found that connectivity-based features outperform conventional activation measures, distinguishing patients from controls and revealing biologically meaningful subgroups. I will also discuss high-resolution 7T fMRI for probing depth-specific responses across cortical layers and demonstrate how ultra-high-resolution datasets such as BigBrain can guide the development of region-specific microcircuit models.
In the second part, I will present anatomical studies with clinical applications. I will show how 7T MRI improves visualization of subcortical structures, evaluate MRI contrasts for thalamic segmentation (and relate them to the BigBrain), and introduce a framework that combines electrode segmentation with 3D inpainting to correct metal artifacts in post-operative DBS scans. Achieving ~98% segmentation accuracy, this approach restores artifact-affected regions with high fidelity, enhancing electrode localization and the reliability of outcome assessments.
Together, these advances demonstrate how precision neuroimaging can move beyond group-level findings to provide individualized biomarkers that improve diagnosis, guide neurosurgical interventions, and ultimately transform patient care.
Short Bio:
Kâmil Uludağ studied Physics at the Technical University of Berlin and completed his Ph.D. in Medical Physics in 2003 on near-infrared optical spectroscopy (Charité, Humboldt University, Berlin), before moving to a postdoc position at the Center for Functional MRI (UCSD, San Diego, USA) to work on the physiological and physical basis of functional MRI. In 2004, he was appointed Head of the Human Brain Imaging group at the Max Planck Institute for Biological Cybernetics, Tübingen, Germany. In June 2010, he became Associate Professor in the Faculty of Psychology & Neuroscience and Head of the Department of Cognitive Neuroscience, continuing his work on the basis of fMRI utilizing the new ultra-high field human MRI scanners (7 and 9.4 Tesla). Since 2019, he has been a Senior Scientist at the University Health Network, Toronto, and a Full Professor in the Department of Medical Biophysics, University of Toronto. In 2024, he was appointed Scientific Director of the human 7T MRI scanner at the Sunnybrook Research Institute. Dr. Uludağ was recently appointed Senior Fellow of the International Society for Magnetic Resonance in Medicine (ISMRM) and holds a Tier 1 Canada Research Chair.
Dr Uludağ will present the Sievers Lecture in Computational Neuroscience.
(Immuno-)histological and magnetic resonance imaging (MRI) research both provide information on the functional neuroanatomy of the human brain. Microscopy techniques provide an unmatched level of anatomical detail but are usually limited by low numbers of observations. MRI research does not achieve the same level of detail but provides insight into interindividual variation through larger numbers of observations. Integrated approaches allow bridging between these complementary imaging modalities.
In our research we combine both techniques. Detailed information is acquired from 7 individual post mortem brains, which undergo quantitative 7 Tesla MRI at 400 µm isotropic resolution. After serial coronal cutting, we create integrated full 3D reconstructions based on blockface images and the immunoreactivity of calcium-binding proteins (parvalbumin, calretinin, calbindin), Alzheimer-related neuropathology (amyloid beta, pTau), vascular markers (CD31 and smooth muscle actin), and/or Bielschowsky and Nissl staining. Coregistration of the microscopy and MRI data at 200 µm resolution in blockface space allows the subsequent transfer of the data to MNI space (1,2). The resulting datasets can be used for MRI validation, as well as for brain atlasing purposes.
The acquired post mortem datasets are currently being used to further advance our atlasing efforts, and to bring the data together with the previously published in vivo Amsterdam Ultra-high field adult lifespan database (AHEAD). The AHEAD dataset consists of 105 7 Tesla whole-brain datasets at 0.7 mm isotropic resolution, and was recently extended with slab quantitative MRI contrasts covering the subcortex at 0.5 mm isotropic resolution (3,4). Our in vivo atlasing efforts have been funneled into the MASSP 2.0 algorithm, which now allows the automated parcellation of 35 individual structures in both cerebral hemispheres (3,5). Finally, our post mortem resources now allow the retraining and expansion of the algorithm using post mortem delineations.
The resulting brain models provide us with the best of both worlds and can be applied to create advanced atlasing tools for neuroimaging research and clinical applications (Fig. 1). Given the labour-intensive nature of whole-brain histological approaches, individual research initiatives can only provide a limited number of donor brains. Open-access publication and sharing of the datasets and derived algorithms and atlases will strongly benefit the progress of the research field.
1. Alkemade, A. et al. 7 Tesla MRI Followed by Histological 3D Reconstructions in Whole-Brain Specimens. Front Neuroanat 14 (2020).
2. Alkemade, A. et al. A unified 3D map of microscopic architecture and MRI of the human brain. Sci Adv 8, 7892 (2022).
3. Bazin, P.-L. et al. Automated parcellation and atlasing of the human subcortex with ultra-high resolution quantitative MRI. Imaging Neuroscience (2025). doi:10.1162/IMAG_A_00560.
4. Alkemade, A. et al. The Amsterdam Ultra-high field adult lifespan database (AHEAD): A freely available multimodal 7 Tesla submillimeter magnetic resonance imaging database. Neuroimage 221, 117200 (2020).
5. Bazin, P.-L., Alkemade, A., Mulder, M. J., Henry, A. G. & Forstmann, B. U. Multi-contrast anatomical subcortical structures parcellation. Elife 9 (2020).
Advances in microscopic imaging and high-performance computing allow analyzing the complex cellular structure of the human brain in great detail. This progress has greatly aided brain mapping, cell segmentation, and the development of automated analysis methods. However, histological image data can contain gaps due to processing artifacts such as missing sections, tissue tears, or inconsistent staining, which may arise during histological lab work despite careful precautions.
To address this issue, we present a convolutional neural network model that reconstructs corrupted data from surrounding tissue while preserving precise cellular distributions. Our approach uses a denoising diffusion probabilistic model trained on light-microscopy scans of cell-body-stained histological sections. We extended this model with the RePaint method to impute corrupted image data, and we evaluate its performance against established deep learning models trained on the same type of histological data.
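Conceptually, the RePaint-style imputation can be summarized by the masked reverse-diffusion step sketched below (a simplified outline with a placeholder denoiser, not the trained model used in this work):

```python
# Simplified RePaint-style masked inpainting step. 'mask' is 1 where tissue is
# missing and must be generated; 'denoise_fn' stands in for the trained DDPM.
import torch

def repaint_step(x_t, x0_known, mask, t, alphas_cumprod, denoise_fn):
    """One reverse-diffusion step that keeps known pixels consistent with the
    original section while sampling the masked (corrupted) region."""
    a_t = alphas_cumprod[t]
    # Forward-noise the known region of the original image to the current level t
    noise = torch.randn_like(x0_known)
    x_known_t = a_t.sqrt() * x0_known + (1 - a_t).sqrt() * noise
    # Reverse-sample the whole image with the (placeholder) denoiser
    x_unknown_t = denoise_fn(x_t, t)
    # Stitch: generated content only where data are missing
    return mask * x_unknown_t + (1 - mask) * x_known_t
```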
A key challenge of our initial model was its difficulty in accurately reconstructing tissue boundaries and larger anatomical structures such as blood vessels. We address these challenges by an enhanced diffusion-based model that incorporates contextual information from adjacent sections of the brain. This model integrates three tissue patches from neighboring sections using a siamese network architecture with cross-attention mechanisms. Leveraging spatially aligned information across consecutive sections, our approach achieves a more anatomically coherent reconstruction.
We demonstrate that our model significantly improves the realism and anatomical plausibility of reconstructed cellular distributions, as measured by both cell density prediction and brain area classification tasks. The error in predicted cell density was reduced to below 5% across large inpainting regions, marking a notable improvement over previous approaches. In addition, the model was evaluated on its ability to handle multiple missing sections at once, with no loss of performance compared to the single-missing-section case. The model reliably preserves tissue borders and reconstructs larger structures such as blood vessels, which are crucial for accurate cytoarchitectonic mapping.
These findings underscore the potential of generative deep learning models for cytoarchitectonic research, opening new avenues for the automated reconstruction of histological data. Beyond inpainting small regions, our approach paves the way for the reconstruction of entirely missing brain slices, offering a powerful tool for bridging data gaps in high-resolution brain mapping efforts.
Microscopic analysis of cytoarchitecture in the human cerebral cortex is essential for understanding the anatomical basis of brain function. We present CytoNet, a foundation model that encodes high-resolution microscopic image patches into expressive feature representations suitable for whole-brain analysis. CytoNet leverages the spatial relationship between anatomical proximity and microstructural similarity to learn biologically meaningful features using self-supervised learning, without the need for manual annotations. The learned features are consistent across regions and subjects, can be computed at arbitrarily dense sampling locations, and support a wide range of neuroscientific analyses. We demonstrate state-of-the-art performance for brain area classification, cortical layer segmentation, morphological parameter estimation, and unsupervised parcellation. As a foundation model, CytoNet provides a unified representation of cortical microarchitecture and establishes a basis for comprehensive analyses of cytoarchitecture and its relationship to other structural and functional principles at the whole-brain level.
Large-scale scientific imaging datasets, ranging from terabytes to petabytes, are increasingly central to neuroscience and other scientific fields. These datasets require heterogeneous tools for analysis and visualization, which impose conflicting requirements on file formats, metadata schemas, and storage access patterns. Converting between formats or duplicating data is a common workaround, but it introduces inefficiencies, storage overhead, and potential errors in large-volume workflows.
We present tiamat, the Tiled Image Access and Manipulation Toolkit, a flexible and extensible Python framework that facilitates reading, transforming, and exposing large image datasets through a configurable pipeline of readers, transformers, and interfaces.
Tiamat supports on-the-fly transformations such as normalization, axis reordering, and colormapping, while streaming data to diverse endpoints—including Napari, Neuroglancer, OpenSeadragon, Python/Numpy scripts, and FUSE—without requiring intermediate file conversion or duplication. We demonstrate its use within the EBRAINS platform, where tiamat delivers 1µm-resolution histological brain images from the BigBrain dataset directly from high-performance GPFS storage to web-based viewers and analysis clients.
Tiamat decouples data storage from visualization and analysis workflows, enabling modular, reusable, and domain-agnostic image processing pipelines.
Its plugin-based design and compatibility with multiple tools offer a scalable solution for managing large scientific image datasets. Tiamat is implemented in Python, released under the Apache 2.0 license, and deployed via Docker. The source code is publicly available.
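The reader → transformer → interface pattern described above can be illustrated with the generic sketch below; the class and method names are hypothetical placeholders and do not reflect tiamat's actual API.

```python
# Generic illustration of a reader -> transformer -> interface tile pipeline.
# All names here are hypothetical, not tiamat's API.
import numpy as np

class ZarrLikeReader:
    """Hypothetical reader returning a 2D tile from a large volume."""
    def __init__(self, volume):
        self.volume = volume
    def read_tile(self, z, y, x, size=256):
        return self.volume[z, y:y + size, x:x + size]

def normalize(tile):
    """On-the-fly transformer: robust intensity normalization."""
    lo, hi = np.percentile(tile, (1, 99))
    return np.clip((tile - lo) / (hi - lo + 1e-8), 0, 1)

class ViewerInterface:
    """Hypothetical endpoint that pulls tiles through the transformer chain."""
    def __init__(self, reader, transforms):
        self.reader, self.transforms = reader, transforms
    def get(self, z, y, x):
        tile = self.reader.read_tile(z, y, x)
        for t in self.transforms:
            tile = t(tile)
        return tile

viewer = ViewerInterface(ZarrLikeReader(np.random.rand(10, 1024, 1024)), [normalize])
tile = viewer.get(z=3, y=128, x=256)
```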
The demand for interoperable data processing in neuroscience underscores persistent challenges in cross-border data sharing, secure access to distributed resources, and the portability of tools across organizations. Neuroscience datasets are particularly sensitive due to privacy regulations and their size and diversity, which complicate collaborative research. These requirements motivate the development of approaches that ensure secure access, portability, and reproducible workflows while harmonizing data interoperability across infrastructures. A promising solution lies in the “bring compute-to-data” paradigm, where applications are executed on local infrastructures without moving sensitive data, thereby fostering compliance with jurisdictional and institutional requirements.
Within this context, the BigBrain dataset and related ultra-high-resolution neuroanatomical resources provide a unique test case. These data are of great scientific value but demand specialized processing pipelines and scalable computing environments. European research infrastructures such as EBRAINS have advanced services for data access and HPC, yet integrating analysis workflows seamlessly across borders remains complex. The CBRAIN platform, a federated neuroinformatics environment for distributed computing and data management, offers complementary strengths: a flexible plugin-based tool integration model, user-friendly interfaces, and federated access to heterogeneous compute and storage resources. CBRAIN emphasizes cross-institutional interoperability and secure, auditable execution, making it well suited for international collaborations around BigBrain.
The presentation will demonstrate the successful adoption and extension of two neuroscience applications on CBRAIN, deployed to resources at the Jülich Supercomputing Centre (JSC): HippUnfold, for automated hippocampal subfield segmentation, and a Cell Detection workflow for imaging analysis.
To enable this, CBRAIN’s security model was extended to support service accounts for automated but controlled access, a critical feature for scaling collaborative analysis. In parallel, deployment of CBRAIN components was customized to align with JSC’s policies and infrastructure requirements. Datalad was central to this integration, providing dataset versioning, provenance tracking, and reproducible data management across CBRAIN and JSC systems (Figure 1). This ensured seamless synchronization of BigBrain-derived datasets and enabled transparent workflows that can be rerun or extended by collaborators.
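A minimal sketch of the DataLad layer (the repository URL and paths below are placeholders; only the datalad.api calls themselves are standard) might look as follows:

```python
# Hedged sketch of DataLad-based, provenance-tracked data handling around a
# BigBrain-derived dataset. URL and paths are hypothetical.
import datalad.api as dl

# Obtain a versioned copy of a (hypothetical) BigBrain-derived dataset
ds = dl.clone(source="https://example.org/bigbrain-derived.git",
              path="bigbrain-derived")

# Fetch only the files a pipeline run actually needs (lazy, content-addressed)
dl.get(dataset=ds, path="sub-01/anat/")

# Record outputs with provenance so collaborators can rerun or extend the step
dl.save(dataset=ds, path="derivatives/hippunfold/sub-01",
        message="HippUnfold segmentation for sub-01 (run via CBRAIN at JSC)")
```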
The results demonstrate the portability of the HippUnfold and Cell Segmentation pipelines, executed efficiently by CBRAIN across borders, with consistent outputs on BigBrain data. These use cases confirm that CBRAIN provides a robust framework for secure data access, tool sharing, and reproducible processing. More broadly, this work illustrates a pathway to harmonize access to computational resources internationally, enabling scalable neuroscience research that leverages high-value datasets such as BigBrain.
Spatial omics technologies enable the study of molecular distribution patterns within the brain, offering critical insights into cellular organization and function. A significant challenge in this field is accurately mapping spatial omics data onto existing 3D brain atlases. Successful mapping would facilitate multimodal atlas creation, allow precise transfer of brain area labels from atlases to spatial omics samples, and thus unify brain area annotations across diverse experiments. Current approaches rely on image registration methods, which perform well for 2D-to-2D or 3D-to-3D alignment. However, spatial omics often necessitates 2D-to-3D alignment, requiring integration of sparse 2D omics slides into comprehensive 3D brain atlases. Additionally, traditional image-based registration methods fail to leverage the rich molecular abundance and distribution data intrinsic to spatial omics samples.
To address these limitations, we propose a novel deep learning-based, feature-driven approach for mapping spatial omics data onto 3D atlases. Our strategy begins by creating robust unimodal embeddings for each data modality. To subsequently map unimodal omics embeddings to histology embeddings, we employ multimodal machine learning techniques based on self-supervised contrastive learning and multimodal optimal transport. We evaluate learning strategies both with and without leveraging spatial location information derived from image-based registration, aiming to anchor spatial omics samples effectively within existing 3D brain atlases.
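As an illustration of the contrastive component (a schematic InfoNCE-style sketch; the encoders, batch construction, and the optimal-transport term are omitted, and all dimensions are placeholders):

```python
# Schematic contrastive alignment of omics and histology embeddings:
# spatially matched pairs are positives, other pairs in the batch are negatives.
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(omics_emb, histo_emb, temperature=0.07):
    o = F.normalize(omics_emb, dim=-1)
    h = F.normalize(histo_emb, dim=-1)
    logits = o @ h.t() / temperature                  # (B, B) similarity matrix
    targets = torch.arange(o.size(0), device=o.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

loss = contrastive_alignment_loss(torch.randn(32, 128), torch.randn(32, 128))
```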
In this talk, I will provide an overview of our current unimodal and multimodal representation learning frameworks, highlighting recent results from integrating mouse brain histological atlases with spatial transcriptomics (MERFISH) and spatial lipidomics (MALDI-MSI) data. Furthermore, I will discuss how these results can be extended and applied to BigBrain and human brain samples.
Background. The claustrum is a thin, sheet-like grey matter structure nestled between the putamen and insula, wrapped by the capsulae externa and extrema. It is among the most highly connected brain regions, with reciprocal projections spanning the cortical mantle. Yet claustral function remains underinvestigated in living humans: its complex three-dimensional architecture is poorly understood, and its thinness and proximity to neighboring structures challenge the effective resolution of MRI. Consequently, few in vivo studies exist, and those that do report radically inconsistent characteristics (for example, volume estimates differ by up to fivefold [FIG.1A]), raising concerns about the reliability of findings on connectivity, function, and case-control differences.
Objective. To illuminate the claustrum's three-dimensional anatomy and characterise mapping challenges, our work establishes a multi-scale reference linking micrometre histology to (sub)millimetre MRI, quantifies resolution-dependent distortions and inter-individual variability, and defines the practical limits for reliable in vivo measurement.
Methods. We manually segmented the bilateral claustrum across three scales. First, we derived a continuous three-dimensional "gold-standard" reference from BigBrain (n=1; 100µm isotropic, MNI ICBM-152 space) (Amunts et al. 2013) [FIG.1B]. Second, in ten Julich postmortem brains (5 female; 37–85 years), we mapped the claustrum in native space on every ~60th Merker-stained coronal section (1µm in-plane, ~1.2mm spacing; >400 sections per brain) to validate boundaries and assess population variability relative to BigBrain (Amunts et al. 2020). Finally, we quantitatively compared three 7-Tesla MP2RAGE datasets (n=30; 10 per resolution at 0.5mm, 0.7mm, 1.0mm isotropic) with the BigBrain reference and its resolution-matched downsamplings to benchmark MRI's capacity to resolve claustral morphology.
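The resolution-matching comparison can be sketched as follows (a schematic with a synthetic sheet-like label standing in for the claustrum; the actual analysis used the BigBrain segmentation and registered MRI label maps):

```python
# Schematic resolution-matching benchmark: downsample a thin binary label and
# score voxelwise overlap with Dice. The label below is synthetic.
import numpy as np
from scipy.ndimage import zoom

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + 1e-8)

# Thin, sheet-like ellipsoid at nominal 100 um voxels (claustrum stand-in)
zz, yy, xx = np.mgrid[:120, :120, :120]
gold = ((zz - 60) ** 2 / 2500 + (yy - 60) ** 2 / 36 + (xx - 60) ** 2 / 2500) < 1

for factor, label in [(5, "0.5mm"), (7, "0.7mm"), (10, "1.0mm")]:
    down = zoom(gold.astype(float), 1.0 / factor, order=1) > 0.5   # resolution-matched copy
    back = zoom(down.astype(float), factor, order=0) > 0.5         # upsample for voxelwise overlap
    s = tuple(min(a, b) for a, b in zip(gold.shape, back.shape))
    d = dice(gold[:s[0], :s[1], :s[2]], back[:s[0], :s[1], :s[2]])
    print(f"{label}: Dice vs 100um reference = {d:.2f}")
```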
Results. The BigBrain gold standard provides the first continuous three-dimensional model of the human claustrum from histology. It is broadly consonant with prior two-dimensional histological descriptions but resolves the claustrum in greater detail than recent three-dimensional post-mortem MRI references (Coates and Zaretskaya 2024; Mauri et al. 2024) and whole-brain anatomical atlases (Mai et al. 2015; Ding et al. 2016). Comparison of BigBrain with Julich brains confirmed that while the gold standard is broadly representative, cellular-level resolution suggests direct abutment with the olfactory tubercle, amygdaloid complex (Kedo et al. 2018), and piriform cortex (Kedo et al. 2024) [FIG.1C], with substantial intersubject variability in the ventral claustrum. MRI-to-BigBrain comparisons revealed resolution-dependent distortions that scaled with voxel size (1.0mm > 0.7mm > 0.5mm): mediolateral thickness was inflated, producing paradoxical volume overestimation; anteroposterior length was truncated with anterior portions often missing; and superoinferior extent was underestimated due to largely unresolved ventral "puddles" [FIG.1D].
Discussion. This work resolves a critical methodological bottleneck in claustrum research by providing the first comprehensive validation framework linking histology to MRI. By leveraging BigBrain's unmatched three-dimensional continuity alongside cellular-level validation in the Julich brains, our findings establish minimum resolution requirements and morphological benchmarks for reliable in vivo measurement. In particular, submillimeter resolution at ultra-high field consistently recovers the claustrum’s dorsal core and achieves over 50% spatial agreement with the gold standard, establishing a satisfactory foundation for in vivo studies that may test long-standing hypotheses about claustral connectivity, function, and clinical relevance.
Intracortical microstructure profiling represents a powerful, scalable approach for investigating the laminar organisation of the human cortex in both post-mortem and in-vivo datasets. Building upon a long tradition of histological analysis, this method leverages advances in high-resolution MRI and surface-based sampling to generate quantitative profiles of tissue properties across cortical depths. The present work outlines a standardised workflow for intracortical microstructural profiling that can operate on both 3D post-mortem histology and in-vivo MRI datasets. We demonstrate that the shapes of microstructure profiles are reliable, reproducible across sites and modalities, and robust to variations in data resolution. The workflow can be easily applied to new datasets with the open “microkit”, which is accompanied by comprehensive documentation and a data warehouse (“Microstructural Marketplace”). As the range of applications of microstructure profiling expands across development, aging, and disease, we aim to demonstrate the potential the approach holds for bridging microstructural neuroanatomy with systems-level neuroscience.
In 2013, we published BigBrain1, a high-resolution (20μm^3) histological 3D-reconstructed model of the human brain (Amunts et al., 2013). Over the past several years, focus has been directed towards advances in the reconstruction and analysis of BigBrain2 (Mohlberg et al., 2022; Lepage et al., 2023; Lewis et al., 2024; Mohlberg, Lepage et al., 2025).
However, it remains a notable challenge to use established automated brain-imaging pipelines, which have been validated and optimized for typical in-vivo MRI data (e.g., FreeSurfer, ~0.5mm^3-1mm^3), to process brain volumes reconstructed from histological data at a substantially higher resolution (~100μm^3-200μm^3). Major difficulties include sectioning artifacts, different tissue contrasts, staining imbalances, and the sheer size of the data, leading to sub-optimal performance, memory bottlenecks, and prohibitively long processing times.
In this work, we obtain a whole-brain volumetric parcellation of BigBrain2, together with the extraction of cortical surfaces, via an adaptation of FreeSurfer v8.1. Such a segmentation can provide regions of interest for higher-resolution analyses, such as FreeSurfer’s hippocampal subfield or brainstem segmentations (Iglesias et al., 2015), or other external analyses.
Successful cortical surface extraction is contingent upon accurate white / grey matter tissue classification, as well as proper subcortical masking.
Preprocessing:
nnU-Net classified volume: The nnU-Net algorithm (Isensee et al., 2021) was used to obtain a tissue classification into white matter, grey matter, layer-1, and background on the repaired histological sections spaced 100μm apart, defined from a training set of 77 sections 2mm apart. nnU-Net provides fast, robust 2D segmentation insensitive to staining imbalances, unlike global 3D tissue classification. Where a classical 3D classification algorithm would fail, the deep-learning approach was capable of distinguishing layer-1 of the cortex from white matter, despite both tissue classes showing identical cell-body stain intensities. A 3D classified volume at 100μm^3 isotropic resolution was then reconstructed from the aligned, resampled 2D segmented images at 20μm^3, to serve as the tissue classification in FreeSurfer for the extraction of the cortical surfaces.
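A minimal sketch of this assembly step (the file names, PNG label format, and affine below are assumptions for illustration, not the pipeline's actual I/O) could look like:

```python
# Assemble per-section 2D label images into a single 3D classified volume.
# Paths, file format, and affine are hypothetical.
import glob
import numpy as np
import nibabel as nib
from PIL import Image

section_files = sorted(glob.glob("nnunet_labels/section_*.png"))   # one per 100 um
slices = [np.array(Image.open(f), dtype=np.uint8) for f in section_files]
volume = np.stack(slices, axis=0)                                   # (sections, y, x)

# 100 um section spacing; in-plane labels assumed resampled to 100 um beforehand
affine = np.diag([0.1, 0.1, 0.1, 1.0])                               # mm per voxel
nib.save(nib.Nifti1Image(volume, affine), "classified_100um.nii.gz")
```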
Histological volume: To obtain the required standard initial FreeSurfer whole-brain subcortical segmentation (1mm^3), the original histological intensity volume was submitted to mri_synthseg, and the output inserted as needed into the FreeSurfer v8.1 -hires (150μm^3) recon-all pipeline.
Surface extraction:
Interventions to recon-all (including the inputs described above) allowed for the extraction of the cortical surfaces (~167k vertices per hemisphere) and produced a final subcortical segmentation, which was refined by the cortical surfaces at the resolution of the input volume.
Fig 1A shows nnU-Net classified volume (4 tissue classes, 100μm^3), with original histological volume for reference (100μm^3).
Fig 1B shows FreeSurfer white and gray surface extractions for BigBrain2.
Fig 1C shows automated FreeSurfer volumetric parcellation output (wmparc.mgz) for BigBrain2 (150μm^3). mri_synthseg - now default in FreeSurfer v8.1 - unavoidably internally downsamples all input to 1mm. Therefore, all subcortical delineations shown are at 1mm^3. Only the cortical white / grey segmentations are refined by the surfaces extracted at 150μm^3.
Fig 1D shows FreeSurfer hippocampal subfield segmentation output (150μm^3 within ROI defined at 1mm^3).
The hippocampus is an extension of the neocortex with highly convoluted folding that varies between individuals. This variability presents a significant shape-fitting challenge that must be addressed for precise inter-individual alignment, parcellation, and detailed mapping of hippocampal structure and function. We previously developed HippUnfold, which uses deep neural networks to segment hippocampal tissue types (e.g., grey matter, its high-myelin laminae, and the hippocampal sulcus) and computationally unfold them into a topologically consistent coordinate space (DeKraker et al., 2022, 2023, 2024; Karat et al., 2023). Multiple pretrained models are available for different imaging modalities (T1w, T2w MRI) and populations (healthy adults, Alzheimer’s disease, neonates), and networks can be customized for specific use cases.
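To give a flavour of coordinate-based unfolding, the sketch below relaxes a Laplace-style potential between two boundaries inside a grey-matter mask, which is the classical idea behind such unfolded coordinate systems; it is a simplified illustration under assumed boolean-mask inputs, not HippUnfold's actual implementation (which, in v2, additionally minimizes distortion).

```python
# Minimal sketch: Laplace-style coordinate between two boundaries in a mask.
import numpy as np

def laplace_coords(gm, source, sink, n_iter=500):
    """Relax a potential field within the boolean mask `gm`, clamped to
    0 at `source` voxels and 1 at `sink` voxels (Jacobi iterations).
    Boundary wrap-around from np.roll is ignored for brevity."""
    phi = np.full(gm.shape, 0.5)
    phi[source], phi[sink] = 0.0, 1.0
    for _ in range(n_iter):
        # Average of the six face neighbours
        nb = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
              np.roll(phi, 1, 1) + np.roll(phi, -1, 1) +
              np.roll(phi, 1, 2) + np.roll(phi, -1, 2)) / 6.0
        phi = np.where(gm & ~source & ~sink, nb, phi)
    return phi
```

Iso-surfaces of such a potential supply one axis of a topologically consistent coordinate frame; additional, roughly orthogonal potentials can be solved analogously for the remaining axes.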
However, segmentation errors can propagate into the unfolding stage, producing distortions or failures. Here, we present HippUnfold v2, a major refactoring of the codebase that substantially improves robustness to imperfect segmentations while preserving subject-specific detail. This is achieved without the use of spatial regularization, smoothing, or interpolation that reduce fine anatomical features. In addition, other key updates include:
1. A modified unfolding algorithm that minimizes distortion between folded and unfolded spaces,
2. Standard surface tessellations redesigned for more uniform face sizes,
3. A support pipeline for generating study-specific unfolded atlases, and
4. Codebase improvements for easier installation, faster runtime, and simpler BIDS-like output structure.
These advances were evaluated on diverse 3T (n=51) and 7T (n=10) MRI datasets. v2 showed qualitatively improved surface fidelity and quantitatively higher midthickness surface identifiability (greater intra- vs. inter-subject similarity), indicating better preservation of individual shape features. Failure rates from segmentation imperfections were also reduced.
Overall, HippUnfold v2 delivers improved precision in hippocampal shape modelling, greater resilience to imperfect inputs, and a more maintainable and user-friendly pipeline—facilitating detailed and reproducible mapping of this complex brain structure.
DeKraker, J., Cabalo, D. G., Royer, J., Khan, A. R., & Karat, B. (2024). HippoMaps: Multiscale cartography of human hippocampal organization. bioRxiv. https://www.biorxiv.org/content/10.1101/2024.02.23.581734.abstract
DeKraker, J., Haast, R. A. M., Yousif, M. D., Karat, B., Lau, J. C., Köhler, S., & Khan, A. R. (2022). Automated hippocampal unfolding for morphometry and subfield segmentation with HippUnfold. eLife, 11. https://doi.org/10.7554/eLife.77945
DeKraker, J., Palomero-Gallagher, N., Kedo, O., Ladbon-Bernasconi, N., Muenzing, S. E. A., Axer, M., Amunts, K., Khan, A. R., Bernhardt, B. C., & Evans, A. C. (2023). Evaluation of surface-based hippocampal registration using ground-truth subfield definitions. eLife, 12. https://doi.org/10.7554/eLife.88404
Karat, B. G., DeKraker, J., Hussain, U., Köhler, S., & Khan, A. R. (2023). Mapping the macrostructure and microstructure of the in vivo human hippocampus using diffusion MRI. Human Brain Mapping, 44(16), 5485–5503.
Understanding mechanisms of brain function and dysfunction is at the core of the neuroscience mission. However, our grasp of causal relationships between brain properties is hindered by a historical focus on single modalities, neglecting the complex interplay between neural scales and features. Progress in neuroinformatics and the increasing availability of open datasets such as BigBrain have helped overcome this limitation by facilitating the contextualization of brain maps against cellular, metabolic, and network properties (Fig1A). Contextualization methods propose that quantifying spatial similarity between brain maps (or brain map correlations) may shed light on pathways of structure-function coupling, development, and disease.
Despite the rapid uptake of these methods, their potential pitfalls have received little attention (Fig1B). First, data contextualization studies often apply a series of bivariate correlations to uncover potential relationships between brain maps. In addition to a lack of justification for the brain maps selected in these exploratory analyses, results are often described using causally ambiguous language that can overstate posited mechanistic relationships. Moreover, data contextualization studies tend to reuse reference datasets built from small and non-representative samples, particularly when these datasets are generated using costly and logistically complex methods. Yet the generalizability of insights gained from these brain maps is unknown. Regarding data processing, problems with inter-modal and inter-subject alignment can introduce systematic regional bias in data contextualization studies. Together, these challenges can lead to correlational overreach, overfitting, and circular reasoning, and tie findings to the quality of the source data.
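The kind of bivariate map comparison discussed above can be sketched as follows. Note that the naive permutation null in this sketch is precisely the shortcut to avoid: spatial-autocorrelation-preserving nulls (e.g., spin tests or variogram-matched surrogates such as those provided by neuromaps or BrainSMASH) should replace the plain shuffle in practice. Names and shapes are illustrative assumptions.

```python
# Minimal sketch: correlation between two parcel-wise brain maps.
import numpy as np
from scipy.stats import spearmanr

def map_correlation(map_a, map_b, n_perm=10000, seed=0):
    """Spearman correlation between two parcel-wise maps plus a naive
    permutation p-value. WARNING: shuffling parcels ignores spatial
    autocorrelation and tends to inflate significance."""
    rng = np.random.default_rng(seed)
    r_obs, _ = spearmanr(map_a, map_b)
    null = np.empty(n_perm)
    for i in range(n_perm):
        null[i] = spearmanr(rng.permutation(map_a), map_b)[0]
    p = (np.sum(np.abs(null) >= abs(r_obs)) + 1) / (n_perm + 1)
    return r_obs, p
```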
We propose a roadmap of practical guidelines operating at the level of study design, analysis pipelines, and interpretation of findings to develop best practices in data contextualization (Fig1B). First, researchers should anticipate whether data contextualization is best applied for confirmatory (i.e., hypothesis-driven) or exploratory purposes in their work. This choice should be clearly reported and justified to guide downstream interpretation of results. Second, we encourage frameworks considering different aspects of analytical uncertainty in the data contextualization pipeline, which could include quantitative estimates of co-registration and spatial normalization accuracy as well as regional and inter-individual data homogeneity. Third, correlative studies should ideally be complemented and/or confirmed by paradigms that approach causal inference at the level of study design (e.g., leveraging animal models for electrophysiological stimulation, optogenetic and chemogenetic modulation, or targeted lesions) and analytics (e.g., hierarchical models). Lastly, we advocate for increased data diversity through geographically and clinically broader data collection initiatives. Data augmentation could also leverage synthetic data generated from artificial intelligence techniques when additional data collection is not possible.
A multiscale understanding of neural systems requires analyzing and disentangling their components and interdependencies. While data contextualization has naturally lent itself to this endeavour, neglecting this technique’s intrinsic limitations risks overstating its explanatory power on overarching principles of brain organization. We encourage open discussions in the neuroimaging community to refine data contextualization techniques and their implementation within paradigms better suited to mechanistic investigations of brain organization.
Brain disorders manifest along a spectrum of symptoms that involve disruptions in mood, cognition, or motor function. These symptoms originate from dysfunctions of specific brain circuits and may hence be seen as ‘disorders of the human connectome’, or ‘circuitopathies’.
However, exactly which circuits become dysfunctional in which disorder remains elusive. Moreover, it remains unclear which circuits map to which specific symptoms. Invasive and noninvasive brain stimulation methods are applied to focal points in the depth or on the surface of the brain. However, their focal application leads to network effects that are distributed along brain circuits across the entire brain. By nature, applying brain stimulation is a causal intervention that engages specific brain circuits: If an intervention leads to symptom improvements, we may suspect that the modulated circuit was causally involved in these symptoms.
In this talk, I will review the effects of deep and superficial brain stimulation on the human connectome. We will cover results in diseases ranging from the movement disorders spectrum (Parkinson’s Disease, Dystonia, Essential Tremor) to neuropsychiatric (Tourette’s & Alzheimer’s Disease) and psychiatric (Obsessive Compulsive Disorder, Depression) diseases. I will also demonstrate how findings in seemingly different diseases (such as Parkinson’s Disease and Depression) could be transferred to cross-inform one another, and how the same method may be used to study neurocognitive effects, such as risk-taking behavior or impulsivity.
Andreas received an MD from Freiburg University and a PhD from Charité Berlin. He directs the Institute for Network Stimulation at the University Hospital Cologne. He is further affiliated with the Center for Brain Circuit Therapeutics at Mass General Brigham in Boston.
His lab studies how focal neuromodulation impacts the human connectome to refine clinical treatments for neurological and psychiatric disorders. A key question is which networks should be modulated for improvements of specific symptoms – in disorders such as Parkinson’s Disease, Obsessive Compulsive Disorder, Depression, or Alzheimer’s Disease. Further, the lab develops methods to segregate the human connectome into functional domains by combining brain stimulation with functional and diffusion-weighted MRI.
Abstract—The integration of biophysically grounded neural simulations with Artificial Intelligence (AI) has the potential to transform clinical neurodiagnostics by overcoming the inherent challenges of limited pathological EEG datasets. We present a novel AI-driven framework that leverages a Distributed-Delay Neural Mass Model (DD-NMM) to generate synthetic EEG signals replicating both healthy and pathological brain states. Through systematic parameter tuning and domain-specific data augmentation, we enrich the diversity of simulated signals, enabling robust anomaly detection using machine learning techniques. Our approach integrates supervised classification and unsupervised one-class anomaly detection, achieving over 95% accuracy in synthetic tests and over 89% when applied to real EEG data from epilepsy patients and healthy volunteers. By providing an engineered solution that bridges computational neuroscience with AI, this framework enhances early seizure detection, adaptive neurofeedback, and brain-computer interface applications. Our results demonstrate that theory-driven simulation, combined with state-of-the-art machine learning, can address critical gaps in medical AI, significantly advancing clinical neuroengineering.
Clinical relevance—This study provides a scalable and interpretable AI-driven method for EEG anomaly detection, which can support clinicians in identifying seizure patterns and other neurological disorders with high accuracy. The integration of computational neuroscience with AI-based diagnostics offers a potential pathway for early intervention and personalized neurotherapeutic strategies.
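As a rough illustration of combining simulated training data with unsupervised anomaly detection, the sketch below fits a one-class model on band-power features of healthy EEG epochs; the feature choice, parameters, and function names are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch: one-class anomaly detection on EEG band-power features.
import numpy as np
from scipy.signal import welch
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(epochs, fs):
    """epochs: (n_epochs, n_samples) single-channel EEG segments."""
    freqs, psd = welch(epochs, fs=fs, nperseg=min(256, epochs.shape[1]), axis=-1)
    feats = [psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=-1)
             for lo, hi in BANDS.values()]
    return np.column_stack(feats)

def train_detector(healthy_epochs, fs):
    """Fit a one-class model on healthy (possibly simulated) EEG only;
    detector.predict(...) later returns +1 (normal) or -1 (anomalous)."""
    X = band_powers(healthy_epochs, fs)
    return make_pipeline(StandardScaler(), OneClassSVM(nu=0.05, kernel="rbf")).fit(X)
```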
Introduction: Deep Brain Stimulation (DBS) is a successful symptom-relieving treatment for Parkinson’s disease (PD). However, the introduction of advanced directional DBS electrodes significantly expands the programming parameter space, rendering the traditional trial-and-error approach for DBS optimization impractical and demonstrating the need for computational tools. Our recently developed DBS model using The Virtual Brain simulation tool was able to reproduce multiple biologically plausible effects of DBS in PD (Meier et al., 2022), though a link with clinical outcome data was still missing.
Methods: In the current work, we extended our virtual DBS model toward higher resolution for the stimulus input, incorporating streamlines around the electrode and the electric field calculation, adapting a previous approach (An et al., 2022). The region-based whole-brain simulations were set up with The Virtual Brain, applying the generic two-dimensional oscillator as the neural mass model and an averaged connectome from the Human Connectome Project S1200 release as the underlying structural network. We simulated DBS of N=14 PD patients with available empirical data on monopolar ring and directional contact activations for N=392 different electrode settings (in total over all patients, with varying amplitudes between 0 and 3mA) of the ‘SenSight’ electrode with corresponding motor task outcomes (Busch et al., 2023). The motor task involved maximum-velocity pronation-supination movements of the lower arm, with the movement velocity recorded using a handlebar-like device held in the patient’s hand. To predict the motor task outcome for each individual setting, we fitted a linear model based on the first three principal components of the N=392 time-averaged DBS-evoked simulated responses.
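The prediction step described above (first three principal components of the simulated responses feeding a linear model, evaluated in a leave-one-setting-out scheme) can be approximated as follows; array shapes and names are assumptions for illustration.

```python
# Minimal sketch: PCA + linear model with leave-one-setting-out cross-validation.
import numpy as np
from scipy.stats import pearsonr
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.pipeline import make_pipeline

def predict_outcomes(X, y):
    """X: (n_settings, n_regions) time-averaged simulated DBS responses;
    y: (n_settings,) empirical motor-task improvement per setting."""
    model = make_pipeline(PCA(n_components=3), LinearRegression())
    y_hat = cross_val_predict(model, X, y, cv=LeaveOneOut())
    r, p = pearsonr(y_hat, y)
    return y_hat, r, p
```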
Results: The whole-brain simulations are now sensitive to the exact three-dimensional location of the activated contact and the tested amplitude. Our prediction model based on the simulated, so-called sweet dynamics demonstrated a correlation between predicted and empirically observed motor task improvements due to DBS of r=0.386 (p<10^-4) in a leave-one-setting-out cross-validation (Figure 1A-B). Benchmarking revealed a trend toward better predictions with our sweet dynamics than with imaging-based static methods such as the sweet-spot (r=0.16, p<0.05) and sweet-streamline (r=0.26, p<10^-4) approaches (Hollunder et al., 2024). Furthermore, our model outperforms the traditional trial-and-error method in predicting optimal clinical settings for individual patients, e.g. achieving an over 60% likelihood of identifying the optimal contact within the first two suggested contacts, compared to a 25% likelihood for the trial-and-error method (Figure 1C).
Conclusions: We identified the sweet dynamics that show improved motor task outcome for individual electrode settings of PD patients. These simulated DBS-evoked responses can be used to find the optimal electrode settings via a novel network-dynamics-based computational method. In the future, the developed framework can be used to optimize the electrode placement and settings in silico in individual patients prospectively. Our study showcases the potential benefit of whole-brain simulations for improving clinical routine.
Background: Postmortem MRI has opened up avenues to study brain structure at sub-millimeter, ultra-high resolution, revealing details not possible to observe with in vivo MRI. Here, we present a novel package (purple-mri) that performs segmentation, parcellation, and registration of postmortem MRI. Additionally, we provide a framework to perform one-of-its-kind vertex-wise, group-level studies linking morphometry and histopathology in a common coordinate system for postmortem MRI.
Method: We developed a joint voxel- and surface-based pipeline that combines deep learning with classical techniques for topology correction, cortical modeling, inflation, and registration to achieve accurate parcellation of postmortem cerebral hemispheres (Fig. 1; Khandelwal et al., 2024). Moreover, using the GM/WM segmentations derived from the postmortem hemisphere and FreeSurfer-processed antemortem MRI, we perform deformable image registration between the two modalities for each brain specimen. Vertex-wise thickness analysis was performed to assess tau and neuronal loss distribution in corresponding specimens of postmortem (7T at 0.3mm^3; N=75) and antemortem (3T at 0.8mm^3; N=49) MRI (Table 1) with diagnoses on the AD continuum. The semi-quantitative average tau and neuronal loss ratings were derived from histopathological examination across the brain. All analyses include age, sex, and postmortem (or antemortem) interval as covariates.
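A minimal sketch of the vertex-wise association analysis is given below, assuming thickness has already been resampled to a common surface and that a pathology rating plus covariates are available per specimen; the per-vertex OLS formulation and names are illustrative, and cluster-wise or FDR correction would follow in practice.

```python
# Minimal sketch: vertex-wise thickness vs. pathology, adjusting for covariates.
import numpy as np
import statsmodels.api as sm

def vertexwise_glm(thickness, pathology, covariates):
    """thickness: (n_subjects, n_vertices); pathology: (n_subjects,);
    covariates: (n_subjects, n_cov), e.g. age, sex, postmortem interval."""
    X = sm.add_constant(np.column_stack([pathology, covariates]))
    n_vertices = thickness.shape[1]
    betas, pvals = np.zeros(n_vertices), np.ones(n_vertices)
    for v in range(n_vertices):
        fit = sm.OLS(thickness[:, v], X).fit()
        betas[v], pvals[v] = fit.params[1], fit.pvalues[1]  # pathology term
    return betas, pvals
```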
Result: Our method parcellates the postmortem brain hemisphere using a variety of brain atlases, even in areas with low contrast (anterior/posterior regions), profound imaging artifacts, and severely atrophied brains (Fig. 1). Our registration pipeline provides one-to-one correspondence between the two modalities. For thickness/pathology associations, only small, sparse significant clusters in the superior temporal cortex and precuneus were observed in antemortem MRI (N=49). However, postmortem MRI showed much stronger associations across large clusters in the temporal and entorhinal cortex and the cingulate, regions implicated in ADRD, for both the matched cases (N=49) and the full cohort (N=75).
Conclusion: Purple-mri paves the way for large-scale postmortem image analysis. The stronger associations between thickness and average tau burden/neuronal loss than in antemortem MRI show that our pipeline (purple-mri) could inform the development of more precise and sensitive in vivo biomarkers by mapping information from postmortem to antemortem MRI in a common reference coordinate framework, just as is the norm for antemortem studies.
Introduction: Accurate co-registration of high-resolution histology data to multimodal MRI provides complementary benefits for the validation of imaging biomarkers of the healthy brain and its alterations. While BigBrain [1] and the Julich-Brain atlas [2] provide multi-level probability maps for cell distribution and morphology, BigMac [3] extends these efforts to the co-registration of multi-contrast microscopy to 7-T MRI. In this work, we report the development of a customized pipeline adapted from BigBrain2 [4] and applied to a rat model of traumatic brain injury (TBI), for the co-registration of high-resolution, multi-contrast microscopy sections, cut in coronal or horizontal planes, to anatomical and diffusion MRI at various resolutions.
Methods: Our semi-automated pipeline for histology-MRI co-registration and volumetric reconstruction includes (a) automated, section-to-section alignment at cellular resolution; (b) affine registration to ex vivo structural and diffusion-weighted MRI maps; (c) iterated 2D and 3D linear and nonlinear transformations between stacked histology and reference MRI to account for translation, rotation, scaling, and shearing; and (d) optical balancing of the reconstructed histology volumes.
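Step (b), the affine registration of a downsampled stained section to the matching MRI plane, could be prototyped along the following lines with SimpleITK; the metric, optimizer settings, and inputs are assumptions rather than the exact configuration used in this work.

```python
# Minimal sketch: 2D affine histology-to-MRI registration with SimpleITK.
import SimpleITK as sitk

def register_section(fixed_path, moving_path):
    fixed = sitk.ReadImage(fixed_path, sitk.sitkFloat32)    # MRI slice
    moving = sitk.ReadImage(moving_path, sitk.sitkFloat32)  # downsampled section

    initial = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.AffineTransform(2),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetInitialTransform(initial, inPlace=False)
    reg.SetInterpolator(sitk.sitkLinear)

    transform = reg.Execute(fixed, moving)
    resampled = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
    return transform, resampled
```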
We tested this pipeline on the left hemisphere of four rats – one each of naïve, sham-operated, mild TBI (mTBI), and moderate TBI (moTBI) animals - from a larger dataset introduced in Table 1. Details of surgical procedures, lateral fluid percussion, and tissue processing are presented in [5]. We used the 11.7-T ex vivo MRI with T1-w and T2-w sequences (70-100 µm isotropic) and orientationally averaged diffusion image (150-µm isotropic) as the reference volume. We processed the Nissl- and myelin-stained sections to assess the cyto- and myeloarchitectonics. The stained sections were scanned at 136.9 nm/pixel in-plane, quality controlled, and downsampled to 10.95 µm. Histology photomicrographs and MRI images were masked, and the MRI volumes were re-oriented along the stacking axis of the corresponding histological object.
Results: Our customized pipeline successfully reconstructed volumes of Nissl- and myelin-stained histology at 10.95 µm in-plane resolution, using the anatomical and diffusion 11.7-T MRI volumes as reference, in both coronal and horizontal cutting planes. We ran experiments with section-to-section co-registration at anatomical extremes to evaluate the orientation of misaligned and broken histological pieces. Optical intensity balancing was also able to resolve staining imbalances.
Conclusions: The developed pipeline has the potential for facilitating multimodal data integration in preclinical and clinical studies. The ongoing work includes extracting anatomical landmarks from MRI and histological blocks for quantitative evaluation of linear and nonlinear transformations and section-to-section registrations.
[1] Amunts K et al. BigBrain: an ultrahigh-resolution 3D human brain model. Science. 2013 Jun 21;340(6139):1472-5.
[2] Amunts K et al. Julich-Brain: A 3D probabilistic atlas of the human brain’s cytoarchitecture. Science. 2020 Aug 21;369(6506):988-92.
[3] Howard AF et al. An open resource combining multi-contrast MRI and microscopy in the macaque brain. Nature communications. 2023 Jul 19;14(1):4320.
[4] Lepage C et al. 3D reconstruction of BigBrain2: Progress report on semi-automated repairs of histological sections. 8th BigBrain Workshop 2024.
[5] Molina IS et al. In vivo diffusion tensor imaging in acute and subacute phases of mild traumatic brain injury in rats. Eneuro. 2020 May 1;7(3).
Understanding the emergence of cognitive operations from the brain's topographical organization is a fundamental goal in neuroscience. However, the roles and interactions of functional, structural and chemical brain features in shaping cognitive structure have remained poorly characterized. This study aims to investigate these multimodal contributions to cognitive structure from a spatial patterning perspective.
We used a comprehensive set of 48 brain maps from Neuromaps, encompassing functional MRI, structural MRI, PET, and ASL. The data were collected from independent laboratories. To assess cognitive structure, we focused on CogPC1, a derivative component from Neurosynth that represents the primary axis of variance in functional cognition. To examine the relationships between multimodal brain features and CogPC1, we conducted two analyses. First, we created a correlation matrix to identify and rank brain features, capturing linear associations. Second, we developed machine learning models to predict CogPC1, exploring more complex patterns, including non-linear relationships and interactions among brain features. We created a general model using all modalities, along with four additional models, each based on a single modality. To ensure robust results, we applied five-fold cross-validation to each model, and for model explainability we calculated Shapley additive explanations (SHAP), a technique that uses game theory to determine the contribution of each variable to individual model outputs.
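A condensed sketch of the modelling step, assuming parcel-wise feature and target arrays, is shown below; the choice of a random-forest regressor and a tree-based SHAP explainer is illustrative and may differ from the models actually used.

```python
# Minimal sketch: cross-validated prediction of CogPC1 with SHAP attribution.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score

def fit_and_explain(X, y, feature_names):
    """X: (n_parcels, n_features) brain-map values; y: (n_parcels,) CogPC1."""
    model = RandomForestRegressor(n_estimators=500, random_state=0)
    cv = KFold(n_splits=5, shuffle=True, random_state=0)
    r2 = cross_val_score(model, X, y, cv=cv, scoring="r2")

    model.fit(X, y)
    shap_values = shap.TreeExplainer(model).shap_values(X)
    importance = dict(zip(feature_names, np.abs(shap_values).mean(axis=0)))
    return r2.mean(), importance
```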
The correlation analysis of the brain maps revealed a strong negative correlation between CogPC1 and Functional Connectivity (FC) gradient 1 (r= -0.66), followed by a positive correlation with the norepinephrine transporter (r= 0.50) (Fig.1). Additionally, there was a negative correlation with sensory association areas (r= -0.49) and another negative correlation with FC gradient 7 (r= -0.36) (Fig.1). Among the three structural maps, the evolutionary cortical expansion map showed the highest correlation (r= -0.24) (Fig.1). For the ML models, the general model outperformed the unimodal models, explaining over 80% of the variance in CogPC1 (Fig. 2a). In the general model, we found that functional connectivity gradients 1, 7, and 6 had the highest contributions to predicting CogPC1, all exerting negative influences (Fig. 2b). Among the neurotransmitter modalities, the norepinephrine transporter was the top contributor, showing a positive influence on CogPC1. A clear interaction effect with gradient 1 is visible for gradients 7 and 6 and the norepinephrine transporter (Fig. 2c-f). The contribution of these features to CogPC1 varies along the gradient 1 spectrum, affecting the slope, direction, and magnitude of their relationship with CogPC1.
Our results reveal that functional connectivity gradients and neurotransmitter density maps of receptors and transporters are key predictors of cognitive structure. Gradient 1, in particular, plays a crucial role in interacting with other brain features, suggesting that it encodes the operational regime of other brain features. This study highlights the importance of multimodal integration in understanding cognitive structure and provides insights into the complex interactions between different brain features. These insights could pave the way for personalized medicine, offering more precise brain-based assessments and individualized treatments for cognitive and neurological disorders.
Introduction
Understanding links between (clinical) measures of brain function and their underlying molecular, synaptic constraints is essential for developing and utilizing personalised interventions. We developed a flexible approach to integrate multimodal datasets of different spatial scales and test hypotheses on how micro-/mesoscale properties shape macroscale brain dynamics. Here, we used intracranial electrophysiological data (normative iEEG atlas, Montreal), microscale synaptic data (neurotransmitter receptor density maps, Juelich), and a hierarchical Bayesian framework (dynamic causal modelling (DCM), University College London) to explain how neuroreceptor topography shapes cortical iEEG.
Methods
This study comprises three steps. First, we employed canonical microcircuit models (CMC) to generate iEEG spectra and obtain baseline fits. Subsequently, we investigated how regional receptor compositions (‘fingerprints’) explain variations in cortical electrophysiology, leveraging a flexible parametric empirical Bayesian hierarchical approach. The purpose of this step was to demonstrate how multimodal data can be integrated to evaluate hypotheses and how generative models of neural populations can be enhanced by neurobiological priors. We obtained model evidence, which was used to determine a winning model, and neurobiologically informed connectivity parameters. Finally, normative parameters from the winning model were used in a worked example of mismatch negativity to demonstrate how the derived parameter posteriors can serve as priors to facilitate optimisation and hypothesis testing.
Results
Baseline fit: canonical microcircuit models accurately generated ongoing awake cross-spectral densities of iEEG signals (1770 data series); with 40 exceptions (≅2.3%, with a mean squared error (MSE) > 0.05), the CMCs were able to generate key components of regional cortical signal variability.
Neuroreceptor-informed models: comparison of 21 candidate models – combinations of neuroreceptor-density- and principal-component-informed models – led to a winning model with significantly improved model evidence compared to baseline. Thus, regional variability in receptor composition explains regional variation of iEEG spectra; we obtained improved model evidence across regions, and only 1/1770 series had an MSE > 0.5. Further, the contributions of individual receptor types to the improvement in model evidence are shown.
Worked example: the derived receptor-informed parameter priors for population connection strengths were informative for modelling mismatch negativity and led to significantly higher model evidence and fit, as well as improved parameter posteriors.
Contribution
Our work contributes to bridging the explanatory gap between microscale synaptic properties and macroscale brain dynamics using generative neural population models. We demonstrate an approach to integrate multimodal datasets and derive a normative cortical atlas of parameters. Additionally, we show that regional oscillatory activity measured with physiological intracranial EEG is shaped and best explained by interactions of 15 neurotransmitter receptor systems, and not only by GABAergic and glutamatergic neurotransmission.
Both the method and the derived data can be used by other researchers, either to integrate other (types of) data into a principled framework or to use the derived normative parameter priors for the CMC neural population model to inform their own investigations.
Outlook
This approach has wide-ranging applicability in neuroscience research, especially as the capacity to evaluate brain (dys)function with complementary modalities at varying spatial scales increases dramatically.
Introduction
Long-range temporal correlations (LRTC) are a ubiquitous property of healthy brain activity and, under the Critical Brain Hypothesis, reflect proximity to critical states. In line with this, these scale-free dynamics diminish during loss of consciousness - signaling a departure from criticality - and rebound upon recovery. Yet their implications for individual-level metabolic regulation remain unclear. Here, we relate LRTC in resting-state fMRI, quantified by the Hurst exponent, to glucose metabolism measured with [18F]FDG PET.
Methods
We analyzed a multimodal dataset comprising resting-state fMRI and dynamic [18F]FDG PET from 43 healthy adults. The Hurst exponent was estimated via a wavelet-based approach using the Schaefer 400 7 Networks atlas augmented with the Tian subcortical atlas (S1). Dynamic [18F]FDG PET was fitted with the two-compartment Sokoloff model to obtain the microparameters K1 (tracer inflow), k2 (tracer efflux), and k3 (phosphorylation by hexokinase). The macroparameter Ki (net uptake) captured overall kinetics (Figure 1, panel a). We correlated the Hurst exponent with [18F]FDG parameters at group and individual levels, assessing significance with spatial-autocorrelation-preserving null models (Figure 1, panels b-c). Interindividual variability was tested by regressing mean Hurst exponents against [18F]FDG parameters while controlling for age, sex, and head motion (Figure 1, panel d). We further extended the analyses to two PET tracers: [11C]UCB-J (synaptic density) and L-[1-11C]Leucine (cerebral protein synthesis rate, rCPS). The Hurst exponent map was regressed onto these biological properties. Predictor importance was assessed by dominance analysis, and model performance was evaluated with distance-dependent cross-validation (Figure 1, panel e).
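For orientation, the sketch below estimates a scaling exponent with detrended fluctuation analysis (DFA), a common alternative to the wavelet-based estimator used here; it operates on a single regional time series with assumed scale choices.

```python
# Minimal sketch: scaling (Hurst-like) exponent via detrended fluctuation analysis.
import numpy as np

def dfa_exponent(x, n_scales=12):
    """Return the DFA scaling exponent of a 1-D time series; for stationary
    long-memory signals it approximates the Hurst exponent."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())                       # integrated profile
    scales = np.unique(np.logspace(1, np.log10(len(x) // 4), n_scales).astype(int))

    flucts = []
    for n in scales:
        n_win = len(y) // n
        segments = y[:n_win * n].reshape(n_win, n)
        t = np.arange(n)
        # Remove a linear trend per window and keep the RMS of the residuals
        rms = [np.sqrt(np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2))
               for seg in segments]
        flucts.append(np.mean(rms))

    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope
```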
Results
Across subjects and regions, Hurst exponents were consistently >0.5, indicating persistent dynamics. The Hurst exponent showed a strong positive association with Ki (r = 0.65, pSMASH < 0.0001). Among microparameters, k3 displayed a robust linear relationship with Hurst values (r = 0.44, pSMASH < 0.0001), replicating at the single-subject level. At the interindividual level, higher mean Hurst exponents were associated with faster [18F]FDG phosphorylation (k3), indicating that individuals with stronger LRTC exhibit faster hexokinase-mediated phosphorylation (model R2 = 0.20; β = 0.32, t = 3.28, p = 0.002). In the multilinear model, the weighted combination of Ki, synaptic density, and rCPS explained a substantial fraction of LRTC variance (67%, pSMASH < 0.0001). Dominance analysis identified rCPS as the strongest predictor of the Hurst exponent (general dominance: 58%), followed by glucose metabolism (25%). Distance-dependent cross-validation indicated good generalizability.
Discussion
LRTC are pervasive in spontaneous brain activity and couple systematically with metabolism across regions and individuals. At the individual level, stronger LRTC corresponded to faster [18F]FDG phosphorylation, linking intrinsic neural dynamics to metabolic utilization. Beyond glucose, synaptic density and protein synthesis also contributed, suggesting that sustaining temporally persistent dynamics requires energy as well as ongoing molecular remodeling. Overall, scale-free fluctuations - while supporting efficient information processing - carry a substantial metabolic cost.
Comparing whole-brain dynamics across individuals and modalities remains challenging, particularly when seeking interpretable, cohort-scale metrics that respect the discrete, metastable nature of brain activity. We introduce a two-part framework that combines energy landscape analysis with a multi-subject dimensionality reduction stage to yield bias-minimal, scale-invariant, and modality-agnostic characterisations of brain dynamics from binarised time series.
First, we fit a pairwise maximum-entropy model to each subject’s activity, using exact, pseudo-likelihood, or variational-Bayes estimators depending on system size. The model matches first and second moments only, avoiding strong distributional assumptions and remaining largely universal across any neuroimaging modality that can be meaningfully binarised. From the fitted parameters we construct basin graphs and disconnectivity graphs, derive Metropolis transition kernels, and compute a kinetic fingerprint comprising basin occupancies, barrier heights, dwell-time distributions, relaxation spectra, committors, and mean first-passage times. Two complementary accuracy indices quantify information gain over an independent baseline. Robustness is enforced by bootstrap confidence intervals, data-driven null thresholds for moment residuals, and quality control that verifies consistency between empirical and modelled moments.
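The core objects of this first stage can be sketched as follows, assuming fields h and zero-diagonal couplings J have already been estimated and activity is binarised to ±1; greedy single-flip descent assigns each observed pattern to a local energy minimum, from which basin graphs and occupancies can then be assembled. Names are illustrative.

```python
# Minimal sketch: energy of a pairwise maximum-entropy (Ising-like) model
# and greedy descent to the local-minimum basin of a binarised pattern.
import numpy as np

def energy(s, h, J):
    """s: (N,) pattern in {-1, +1}; h: (N,) fields; J: (N, N) symmetric,
    zero-diagonal couplings."""
    return -h @ s - 0.5 * s @ J @ s

def descend_to_basin(s, h, J):
    """Flip one unit at a time while any single flip lowers the energy;
    the returned pattern identifies the attractor basin of `s`."""
    s = s.copy()
    while True:
        # Energy change of flipping unit i: dE_i = 2 * s_i * (h_i + (J s)_i)
        dE = 2 * s * (h + J @ s)
        i = np.argmin(dE)
        if dE[i] >= 0:
            return s
        s[i] *= -1
```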
Second, we embed subjects into a common feature space using a compact vector formed from the above geometric and kinetic descriptors. This multi-subject reduction preserves the most informative structure in each time series while enabling direct, universal comparisons across individuals. The representation supports unsupervised clustering, hypothesis testing, and cross-dataset transfer without re-engineering features for each modality.
Applied to functional ultrasound recordings from multiple mouse lines modelling distinct autism spectrum disorder subtypes, the joint pipeline yields statistically significant, stable clustering of the subtypes. Group differences are interpretable at the level of landscape geometry and kinetics, and the framework flags candidate regions with atypical dynamics for targeted follow-up. Because outputs are low-dimensional and atlas-ready, they can be aligned with high-resolution cytoarchitectonic and receptor maps to probe structure-function coupling across scales, and are directly applicable to neurodevelopmental and neurodegenerative cohorts as well as to cognitive and sensory paradigms.
This work provides a practical route to cohort-level, interpretable phenotyping of whole-brain dynamics. By unifying subject-specific energy landscapes with a common, population-level embedding, it offers a principled way to compare individuals in a single functional coordinate system and to integrate these dynamics with detailed anatomical resources. An accompanying repository and preprint will provide implementation details and extended results.
We present a scalable computational framework for simulating brain dynamics within structurally complex regions such as the hippocampus, integrating high-resolution multimodal data—including BigBrain-derived surface meshes and diffusion MRI tractography—into The Virtual Brain simulator. This Region Brain Network Model (RBNM) framework enables vertex-level placement of neural mass models (NMMs) informed by region-specific connectivity, anatomical subfields, and morphological descriptors. Our layered architecture supports biologically grounded simulations of EEG, MEG, and BOLD signals, validated against empirical recordings across frequency bands. Compared to standard whole-brain simulators, RBNM enhances anatomical precision, simulation fidelity, and regional interpretability. We benchmark the framework using hippocampal data and achieve high spectral correlation with intracranial EEG (iEEG). The code and data pipelines are openly available to support reproducibility and adoption in the neuroscience community.
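As a purely conceptual illustration (not The Virtual Brain itself), the sketch below couples simple two-variable oscillator nodes through a weighted connectivity matrix, which is the basic ingredient the RBNM framework places at vertex level; dynamics, parameters, and names are assumptions.

```python
# Minimal toy sketch: a network of two-variable oscillators coupled by a
# structural connectivity matrix (Euler integration).
import numpy as np

def simulate(W, dt=0.1, t_max=1000.0, g=0.05, i_ext=0.5, seed=0):
    """W: (N, N) coupling weights; returns (steps, N) fast-variable traces."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    v = rng.standard_normal(n) * 0.1   # fast (activity-like) variable
    w = rng.standard_normal(n) * 0.1   # slow recovery variable
    steps = int(t_max / dt)
    out = np.zeros((steps, n))
    for t in range(steps):
        net_input = g * W @ v                              # network drive per node
        dv = v - v**3 / 3.0 - w + i_ext + net_input        # FitzHugh-Nagumo-like
        dw = 0.08 * (v + 0.7 - 0.8 * w)
        v += dt * dv
        w += dt * dw
        out[t] = v
    return out
```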