The NEST Initiative is excited to invite everyone interested in Neural Simulation Technology and the NEST Simulator to the virtual NEST Conference 2024. The NEST Conference provides an opportunity for the NEST Community to meet, exchange success stories, swap advice, and learn about current developments in and around spiking network simulation with NEST and its applications. We particularly encourage young scientists to participate in the conference!
The NEST Conference 2024 will again be held as a virtual event on Monday and Tuesday, 17–18 June.
The detailed program is available here: https://events.hifis.net/event/1168/timetable/#20240618.detailed
Tadashi Yamazaki, University of Electro-Communications, Tokyo
Behnam Ghazinouri, Ruhr University Bochum
Benedetta Gambosi, Politecnico di Milano
Tobias Gemmeke, RWTH Aachen University
NEST is one of the most widely used standard simulation environments for spiking neural networks. It is the first-class simulator in our project endorsed by the Program for Promoting Researches on the Supercomputer Fugaku [1], in which we develop a spiking network model of the mouse brain in NEST on Fugaku and connect it to a musculoskeletal body model running on a local PC, realizing a closed-loop brain-body simulation over the Internet while passing through firewalls. Moreover, we include another brain model, written in C++ with CUDA on a local GPU machine, in the closed loop. To realize this, we needed a way to connect multiple models implemented in multiple simulators running on multiple computers across multiple organizations. We solved this by using the Robot Operating System (ROS) [2], a de-facto standard communication framework in the field of robotics, and rosbridge, which encapsulates ROS messages in JSON and transfers them via WebSocket to pass through firewalls. Specifically, we developed a C++ library for rosbridge that allows simulators written in C/C++ to communicate over rosbridge. Using these technologies, we were able to realize a closed-loop simulation among a cortico-basal ganglia-thalamus model in NEST on Fugaku, a mouse body model in Gazebo on a local PC, and a cerebellar model in C++ with CUDA on another local GPU machine [3]. These results suggest that ROS and rosbridge can provide more flexibility to and enhance the versatility of NEST.
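The rosbridge transport described above wraps each ROS message in a small JSON envelope that can travel over a plain WebSocket connection. A minimal sketch of that envelope (the topic name and payload here are hypothetical, chosen only for illustration):

```python
import json

def make_rosbridge_publish(topic: str, msg: dict) -> str:
    """Wrap a ROS message in a rosbridge 'publish' operation as JSON text.

    rosbridge encapsulates ROS messages as JSON, so they can be sent over a
    WebSocket connection that passes through firewalls.
    """
    return json.dumps({"op": "publish", "topic": topic, "msg": msg})

# Hypothetical topic carrying spike data from the brain model to the body model.
payload = make_rosbridge_publish(
    "/brain/spikes", {"neuron_ids": [1, 5, 9], "t_ms": 120.5}
)
decoded = json.loads(payload)  # what the receiving side would parse
```

On the receiving end, a rosbridge server unpacks this JSON and republishes it as a native ROS message, so each simulator only needs a JSON-over-WebSocket client rather than a full ROS installation.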
The claustrum, a structure with extensive connectivity to the rest of the brain that is involved in many higher cognitive processes, is still one of the least understood parts of the mammalian nervous system. One reason is its complex location and geometry: a folded, thin layer of neurons, sandwiched between other cellular groups and white matter tracts, which creates specific challenges for experimentation. In recent years, however, the claustrum has been studied intensely in mice, revealing many details about its cellular composition and dynamics, but still without a satisfactory mechanistic explanation of its function.
This work investigates the dynamics of the interaction between the claustrum and the cortex through computational simulations. To this end, we built a bottom-up, mesoscale in-silico model of the mouse claustrum that we reciprocally connected with a simplified model of the cortex. Specifically, we used NEST and NESTML to create AEIF neurons (Brette and Gerstner, 2005) for the claustrum and Wang-Buzsaki cortical neurons with difference-of-exponentials synaptic conductance time courses (Wang and Buzsaki, 1996; Palmigiano et al., 2017). From this work in progress, we will present how we used NEST to arrive at sets of parameters that replicate the responses of claustrum neurons in vitro, their arrangement in space, and their measured connectivity (Kim et al., 2016; White and Mathur, 2018; Graf et al., 2020). The replication of the Palmigiano et al. (2017) network allowed the production of a complex, cortical-like signal. Furthermore, we will present preliminary results on the interaction between the claustrum and the cortex.
The 70th anniversary of the first simulation of a spiking neuronal network by Farley and Clark (Proceedings of the 1954 Symposium on Information Theory, Institute of Radio Engineers) provides a good opportunity to take stock of the development of spiking network simulations over the past seven decades, chart present practices in the field, and develop perspectives for future scientific practice in the field. Current practice shows an interesting division: modeling of networks of morphologically detailed cells appears to be almost entirely based on the NEURON simulator, with ModelDB as the central, neuroscientifically curated and actively used model repository. In contrast, for networks of point neurons, in spite of some integrative efforts (e.g., Brette et al., 2007), a wide range of simulation tools exists, and many scientific publications are still based on hand-crafted code in generic programming languages, with models often at best shared via author-maintained GitHub repositories. Computational neuroscience of spiking network models is furthermore dominated by studies based on small networks of a few thousand neurons, even though data-driven models of millions of neurons are publicly available and simulation technology has been capable of simulating networks of billions of neurons for a decade. I would like to stimulate a discussion of these observations, their causes, their consequences, and, if deemed necessary and desirable, steps towards a better computational neuroscience practice.
NEST simulations are typically executed by a script that configures and runs the simulation. Despite recent improvements in NEST 3.x, where file headers specify the detailed origin of the outputs, users still must interpret the data with respect to the simulation setup. This information is difficult to convey, especially in collaborative contexts with shared simulation results. Moreover, during the explorative process of scientific discovery, results may change without warning when details of the simulation are changed, which could lead to wrong interpretations by collaborators who are unaware of such changes. Therefore, we face two challenges: results are stored in data objects without metadata that describe their role in the simulation, and the simulation outputs are not linked to a description of their provenance with respect to the simulation building.
Here we present concepts to tackle both challenges when using the NEST Python interface. We consider a typical simulation experiment and subsequent data analysis using the Elephant (doi:10.5281/zenodo.1186602; RRID:SCR_003833) toolbox [1]. First, we show how data from a NEST simulation can be represented with data objects annotated with simulation details using the Neo library [2]. Second, we demonstrate how the software Alpaca (doi:10.5281/zenodo.10276510; RRID:SCR_023739) can capture workflow provenance when running a simulation (see Figure) [3]. The two approaches allow the semantic description of the simulation experiment that contributes to the FAIR principles [4] by improving the findability of results through detailed provenance, supporting interoperability through a standardized data model, and promoting reuse of simulation data through enhanced data description.
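The first idea above, attaching the simulation details to the result objects themselves, can be illustrated in plain Python (this is a conceptual stand-in for Neo's annotated data objects, not the Neo API; the model name and annotation keys are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class AnnotatedSpikeTrain:
    """Spike times bundled with metadata describing their role in the simulation.

    Mirrors, in plain Python, the idea behind annotated data objects: the
    result carries its own description instead of relying on file names or
    out-of-band notes shared between collaborators.
    """
    times_ms: list
    annotations: dict = field(default_factory=dict)

# Hypothetical recording from one population, annotated with simulation details.
st = AnnotatedSpikeTrain(
    times_ms=[12.1, 45.3, 78.9],
    annotations={
        "population": "excitatory",
        "model": "iaf_psc_alpha",   # neuron model used in the simulation
        "simulator": "NEST 3.x",
        "resolution_ms": 0.1,
    },
)
```

A downstream analysis tool can then select or interpret data by these annotations rather than by fragile naming conventions, which is what supports the findability and reuse goals mentioned above.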
The development and function of the cerebral cortex of the mammalian brain is a complex orchestration of cellular dynamics leading to a highly specialised structure. The present study explores the development of a gene regulatory network that abstracts the underlying DNA and genetic expression responsible for this anatomical process. An agent-based model is created in the high-performance simulation platform BioDynaMo to model the 3D spatial formation of the neocortex. A laminated structure of neuronal cell bodies is produced through stochastic cell fate determination, and cell numbers are verified. Multicompartmental neurons are grown using local guidance cues to generate realistic circuit morphologies. Entire cortical columns are simulated, with the potential for multicolumn connectivity analysis.
When stimulated in NEST, these spatially informed circuits are found to produce homeostatic network dynamics through realistic afferent connectivity and input regimes. Synaptic weights are updated through a BCM-based approach to produce networks with realistic cortical activity. This modelling approach allows investigations into the effects of each stage of development and the emergence of functional circuitry in the cortex. Initial analysis is carried out on network motifs and the encoding of synthetic stimuli, showcasing emergent computational units. The networks grown mimic canonical microcircuit connectivity. A full study is underway to analyse the emergent functional circuits that can be grown in this realistic corticogenesis simulation. The model is set up to validate a set of hypotheses regarding emergent circuitry, electrophysiology, and the effect of activity during development in the cortex.
NEST Desktop is a web-based GUI application for NEST Simulator [1, 2]. It has become established as a useful tool for guiding students and newcomers in learning the concepts of computational neuroscience by exploring the behavior of neuron models and network dynamics.
The latest release (v3.3) provides more models, e.g., multi-compartmental models and synaptic models for plasticity (STDP, Tsodyks). These virtual experiments can be performed on a local machine, on JSC systems via a Jupyter proxy extension, or on public infrastructure on EBRAINS [3]. Furthermore, the app collaborates with various projects such as Insite (activity during live simulation) [4], the Neurorobotics Platform (NRP) [5], and ViSimpl (a visualization application) [6].
I will talk about the current development of NEST Desktop (v4.0), especially the plan to integrate human multi-area cortex models (HuMAM) [7] into NEST Desktop, where users can simulate the large-scale network dynamics of various human brain areas with NEST and analyze them with Elephant [8]. For this purpose, a hierarchical network structure is embodied in NEST Desktop.
More generally, NEST Desktop is being rewritten in a new framework and is designed around a plugin-based architecture. With this concept, other spiking-network simulation tools, e.g., Norse [9] and PyNN [10], can be used as plugins for the front end, with corresponding back ends.
A model for NMDA-receptor-mediated synaptic currents generating persistent activity proposed by Wang and Brunel [1-3] has been widely adopted in computational neuroscience, both for spiking-neuron and mean-field models [1-4]. The model describes synaptic dynamics by a phenomenological two-dimensional nonlinear ODE system for the gating variable S(t). Due to the nonlinearity, the pre-synaptic gating variables of a post-synaptic neuron cannot be simulated in aggregated form. Numerically efficient solutions are only feasible for fully connected networks with identical, short delays (see e.g. [5]).
We derive a linear approximation to Wang’s model which allows us to integrate all NMDA input currents to a neuron in aggregate form as for linear synapses. Using a reference implementation in NEST, we show that the approximation is accurate and that a network model based on the approximation shows the same decision making dynamics as one using Wang’s original model. For an example network with around 8000 neurons, the approximation is about 30 times faster, and scales sublinearly with the number of synapses.
Exploiting the flexibility and performance gained through the approximation, we investigate the dynamics of a binary decision-making network with sparse connectivity and randomized delays.
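The gist of the approximation can be sketched numerically. In the Wang-Brunel model the gating variable obeys dS/dt = -S/tau + alpha*x*(1-S); dropping the saturation factor (1-S) yields a linear equation, so all presynaptic contributions to a neuron can be summed into one aggregated state. The parameter values below are illustrative only, not those of the cited model; for weak drive, where S stays far from saturation, the two forms agree closely:

```python
def integrate(drive, tau, alpha, linear, t_ms=1000.0, dt=0.1):
    """Euler-integrate the NMDA gating variable S under a constant drive x.

    Nonlinear (Wang-style): dS/dt = -S/tau + alpha * x * (1 - S)
    Linearized:             dS/dt = -S/tau + alpha * x
    The linear form permits aggregation of inputs, as for linear synapses.
    """
    s = 0.0
    for _ in range(int(t_ms / dt)):
        gain = alpha * drive * (1.0 if linear else (1.0 - s))
        s += dt * (-s / tau + gain)
    return s

# Illustrative parameters: tau in ms, weak constant drive x.
tau, alpha, x = 100.0, 0.5, 0.001
s_nonlinear = integrate(x, tau, alpha, linear=False)  # -> alpha*x*tau/(1+alpha*x*tau)
s_linear = integrate(x, tau, alpha, linear=True)      # -> alpha*x*tau
```

The steady states differ only by the factor 1/(1 + alpha*x*tau), which stays close to 1 as long as the saturation term is small; this is the regime in which the aggregated linear form tracks the original dynamics.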
We developed a biologically grounded model of the human hippocampal CA1 region. The model includes all the pyramidal cells and interneurons, with a realistic number of cells (about 5 million) and connections (31 billion). The connectivity matrix was generated using previously published methods [1] and stored in the SONATA data format. The model is set up to run in the NEST simulator using a previously published adaptive leaky integrate-and-fire neuron model [2] that faithfully reproduces the spike trains observed in vitro and in vivo. Here, we present the NEST implementation choices we had to make to run our model on the supercomputer Galileo100 at the CINECA facility in Italy.
Research on parkinsonism has underscored the central roles of basal ganglia (BG) alterations and reduced dopamine levels in symptom emergence [7]. Recent studies, however, hint at cerebellar involvement in altered parkinsonian brain activity [8, 2]. To unravel the role of this region in parkinsonism, we developed an innovative multiscale, multiarea brain model, aiming to investigate neural dynamics in both healthy and parkinsonian states. This model integrates microcircuits of the BG [4] and cerebellum [3], employing spiking neural networks (SNN), which also simulate dopamine depletion mechanisms, and includes a three-equation mass model of the cortex, thalamus, and reticular nucleus, reproducing the loops these areas engage in [9]. After validation with respect to the stand-alone SNN circuits [3, 4], we first tested the model in a generic motor state. We found that the resemblance between our simulations and experimental data [5, 6], as indicated by matching population firing rates and enhanced beta band oscillations, was notably more pronounced when dopamine-depletion effects occurred in both the cerebellum and BG (compared to the BG alone), emphasizing a more direct involvement of the cerebellum in parkinsonism. Lastly, we simulated a behavioral protocol, eyeblink classical conditioning, incorporating plastic mechanisms at the cerebellar level [1] under both physiological and pathological conditions. Results indicate that despite the altered cellular structure, the cerebellum exhibits adaptive capabilities, albeit with reduced effectiveness compared to the physiological state. In summary, our findings stress the significance of recognizing the cerebellum’s role in parkinsonism to fully grasp the intricate neural mechanisms underlying the related disorders.
After many years of research in neuroscience, the manifestation of intelligence in spiking neurons remains a puzzle. Computational neuroscience has the potential to uncover the underlying principles but is burdened by the computational complexity of executing biological neural network simulations of sufficient size and realism.
Various efforts have addressed the challenge of acceleration by applying software as well as hardware design optimizations. In doing so, a classic gap opens between the flexibility and usability of a platform and the performance it provides.
In 2018, we had the opportunity to take a fresh look at the challenge at hand, assessing existing system solutions as well as the knowledge gained in the domain of neuroscience. It quickly became clear that, to be successful, any platform has to combine three aspects: usability for neuroscientists, flexibility to accommodate new models and insights, and sufficient computing performance to handle reasonably large networks and to provide insight into slow plasticity processes.
In this talk, I will review the key requirements placed on such a system from a neuroscience as well as an engineering perspective. The focus is on three key bottlenecks: communication, data access, and numeric updates of the model equations. This is followed by a discussion of the potential of existing architectural concepts to address them. Finally, the real-world neuroAIx FPGA cluster with its 20x speed-up is juxtaposed with its blueprint promising a 100x acceleration.
NESTML is a domain-specific modeling language for neuron models and synaptic plasticity rules [1]. It is designed to support researchers in computational neuroscience by allowing them to specify models in a precise and intuitive way. These models can subsequently be used in dynamical simulations of small or large-scale spiking neural networks, using high-performance simulation code generated by the NESTML toolchain. The code extends a simulation platform (such as NEST [2]) with new and easy-to-specify neuron and synapse models, formulated in NESTML.
NESTML was originally developed for NEST simulations to be executed on CPUs; here we extend it with support for GPU-based simulation for the NEST-GPU target platform [3]. We demonstrate our approach with code generation for a balanced random network of integrate-and-fire neurons with postsynaptic currents in the form of decaying exponential kernels. The dynamics of the network are solved using exact integration [4]. We validate the results through a comparison of statistical properties of the network dynamics against those obtained from NEST running on CPUs as a reference.
With NESTML, neuroscientists have to write the model code only once and can run the same models on multiple target platforms without writing any additional code. This alleviates the need to write CUDA/C++ code for GPUs, with its attendant pitfalls, and improves model reuse and scientific reproducibility.
NEST, a distributed neural network simulator, is capable of simulating large, sparsely interconnected networks, wherein axons and dendrites are represented as simple transmission delays. The synapses in these networks can incorporate plasticity mechanisms, including the widely used spike-timing dependent plasticity (STDP). Presently, NEST employs purely dendritic delays, which are suitable for networks up to approximately one cubic millimeter. To accommodate larger networks and enhance the accuracy of STDP weight dynamics, the specification of both dendritic and axonal delays is crucial. However, the introduction of axonal delays presents a causality dilemma, as pre-synaptic spikes may be processed internally prior to reaching the synapse, necessitating knowledge of future post-synaptic spikes. Several strategies to circumvent this issue are explored, with one requiring minimal alterations to the existing code and others involving a comprehensive overhaul of low-level simulator code for a cleaner solution. The most promising strategies are assessed using performance and memory benchmarks.
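The causality dilemma described above can be made concrete with a toy calculation of when each spike is seen at the synapse (this sketches the timing arithmetic only, not NEST's internal event handling; the function name and delay values are hypothetical):

```python
def pairing_interval(t_pre, t_post, d_axonal, d_dendritic):
    """Return the pre/post spike-time difference as seen at the synapse (ms).

    A presynaptic spike emitted at t_pre arrives at the synapse after the
    axonal delay; the postsynaptic spike emitted at t_post is seen there
    after the dendritic (backpropagation) delay. STDP acts on the
    difference of the two arrival times.
    """
    arrival_pre = t_pre + d_axonal
    arrival_post = t_post + d_dendritic
    return arrival_post - arrival_pre

# Purely dendritic delays: the pre spike at 10.0 ms reaches the synapse
# immediately, before the post spike at 10.5 ms (seen at 11.5 ms).
dt_dendritic = pairing_interval(10.0, 10.5, d_axonal=0.0, d_dendritic=1.0)

# With a large axonal delay, the same pre spike only reaches the synapse at
# 12.0 ms, after the post spike's arrival at 11.5 ms: when the pre spike is
# delivered early by the simulator, the relevant post spike lies in its future.
dt_axonal = pairing_interval(10.0, 10.5, d_axonal=2.0, d_dendritic=1.0)
```

The sign flip between the two cases is exactly why delivering presynaptic spikes ahead of time, as current event-driven synapse updates do, requires either buffering or restructuring once axonal delays dominate.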
We will present Arbor, a multicompartmental simulation library that complements NEST, TVB, and nanoscale simulations and offers interfaces to these tools. Arbor has been designed to leverage modern hardware, including GPUs, while delivering an intuitive interface to neuroscientists that is isolated from concrete, low-level details. It has been shown to deliver performance up to the full scale of the JUWELS Booster.
In this tutorial, we will show how to use Arbor, starting from a simple ring network in NEST that is transformed, step by step, into a morphologically detailed model. Participants will be given ample opportunity to interact with the models.
Microcircuits are the building blocks of the neocortex [1]. Single instances have been reconstructed experimentally (e.g., [2]), and their general dynamics and information processing capabilities have been investigated theoretically (e.g., [3,4]). Their connectivity is usually represented in connectivity maps consisting of probabilities that neurons establish connections. These maps reduce the complicated circuitry to simple relations between cell types, allowing for efficient instantiations of neural network models on parallel computers [5]. While higher-order features like connectivity motifs are neglected, such maps enable the discovery of how the underlying structural principles of local circuits are linked to their dynamics.
Recent years have seen significant advances in the application of electron microscopy (EM) for the reconstruction of local cortical networks through leveraging novel machine learning techniques ([6, 7]). These data allow for a more precise look into the architecture of local cortical circuits than was previously possible.
Here, we construct a layer-resolved, population-based connectivity map from a 1 mm³ EM reconstruction of mouse visual cortex [6]. We compare the obtained microcircuit connectivity based on EM data with a corresponding representation derived from light microscopy (LM) data [2]. The connectivity maps exhibit qualitative differences, e.g., in termination patterns of inter-laminar projections. Additionally, we find that the length scale of connectivity is consistently overestimated when using morphology-based approaches compared to the actual connectivity available from EM data. Finally, we simulate spiking neural networks constrained by the derived microcircuit architectures with NEST [8], investigating the extent to which simulated spiking activity is consistent with experimentally observed neural firing.
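As an illustration of how such population-based connectivity maps are typically instantiated, each potential neuron pair is connected independently with the probability given for its population pair. This is a minimal sketch with made-up population labels, sizes, and probabilities, not values derived from [2] or [6]:

```python
import random

def sample_connections(prob_map, pop_sizes, seed=42):
    """Instantiate a network from a population-level connection-probability map.

    prob_map[(src, tgt)] is the probability that a neuron in population src
    connects to a neuron in population tgt; each potential pair is drawn
    independently (Bernoulli), as in typical microcircuit instantiations.
    """
    rng = random.Random(seed)
    connections = []
    for (src, tgt), p in prob_map.items():
        for i in range(pop_sizes[src]):
            for j in range(pop_sizes[tgt]):
                if rng.random() < p:
                    connections.append((src, i, tgt, j))
    return connections

# Hypothetical two-population map (labels and probabilities are illustrative).
prob_map = {("L4E", "L23E"): 0.1, ("L23E", "L23E"): 0.05}
pop_sizes = {"L4E": 50, "L23E": 50}
conns = sample_connections(prob_map, pop_sizes)
```

Replacing the Bernoulli draws with pair statistics measured from EM data is one way the higher-order motifs neglected by such maps could eventually be incorporated.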
There is mounting experimental evidence that brain-state specific neural mechanisms supported by connectomic architectures serve to combine past and contextual knowledge with the current, incoming flow of evidence (e.g., from sensory systems). Such mechanisms are distributed across multiple spatial and temporal scales and require dedicated support at the level of individual neurons and synapses. A prominent feature of the neocortex is the structure of large, deep pyramidal neurons, which show a peculiar separation between an apical dendritic compartment and a basal dendritic/peri-somatic compartment, with distinctive patterns of incoming connections and brain-state specific activation mechanisms, namely apical amplification, isolation, and drive, associated with wakefulness, deeper NREM sleep stages, and REM sleep, respectively. The cognitive roles of apical mechanisms have been demonstrated in behaving animals. In contrast, classical models of learning spiking networks are based on single-compartment neurons that miss the description of mechanisms combining apical and basal/somatic information. This work leverages the NEST multi-compartment modelling framework, aiming to provide the NEST community with a simplified neuronal model (Ca-AdEx) that captures brain-state specific apical amplification, isolation, and drive through the integration of calcium dynamics in a distal compartment. The proposed neuronal model is essential for supporting brain-state specific features in NEST learning networks at minimal computational cost in the case of two-compartment Ca-AdEx usage. A machine learning algorithm, constrained by a set of fitness functions, selected the parameters defining neurons expressing the desired apical mechanisms. Furthermore, we identified a piece-wise linear transfer function (ThetaPlanes) to be used in large-scale bio-inspired artificial intelligence systems.
To survive in a changing world, animals often need to suppress an obsolete behavior and acquire a new behavior. This process is known as reversal learning (RL). The neural mechanisms underlying RL in spatial navigation have received limited attention, and it remains unclear which neural mechanisms maintain behavioral flexibility.
We extended an existing closed-loop simulator of spatial navigation and learning based on spiking neural networks (Ghazinouri et al., 2023). The activity of place cells and boundary cells was fed as input to 40 action selection neurons, each of which represents one direction of movement. The activity of these neurons drove the movement of the agent. When the agent reached the goal, behavior was reinforced with spike-timing-dependent plasticity (STDP) coupled with an eligibility trace, which marks synaptic connections for future reward-based updates. The modeled RL task had an A-B-A design, where the goal was switched between two locations A and B every 10 trials.
Agents using symmetric STDP excel initially at finding goal A, but fail to find goal B after the goal switch, perseverating on goal A. Injecting short noise pulses into the action neurons, using asymmetric STDP, and using small place field sizes were each effective in driving spatial exploration in the absence of rewards, which ultimately led to finding goal B and, hence, reversal learning. However, this flexibility comes at the price of lower performance. Our work shows three examples of neural mechanisms that achieve flexibility at the behavioral level, whose differences can be understood in terms of attractor dynamics.
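The reinforcement scheme described above, STDP coupled with an eligibility trace that is converted into a weight change only at reward delivery, can be sketched as follows (time constants, kernel shape, and learning rate are illustrative, not those of the cited model):

```python
import math

def stdp_eligibility_step(elig, delta_t, tau_stdp=20.0, tau_elig=500.0, dt=1.0):
    """One time step of an eligibility trace for reward-modulated STDP.

    A pre/post pairing with interval delta_t (post minus pre, in ms) adds a
    symmetric STDP contribution to the trace; between pairings the trace
    decays with time constant tau_elig, marking the synapse for later reward.
    """
    elig *= math.exp(-dt / tau_elig)                 # passive decay
    if delta_t is not None:
        elig += math.exp(-abs(delta_t) / tau_stdp)   # symmetric pairing kernel
    return elig

def apply_reward(w, elig, reward, lr=0.05):
    """At reward delivery, convert the accumulated eligibility into a weight change."""
    return w + lr * reward * elig

# A pairing at t=0 marks the synapse; the reward arrives 100 ms later and
# reinforces the still-decaying trace.
elig = stdp_eligibility_step(0.0, delta_t=5.0)
for _ in range(100):                                 # 100 ms without further pairings
    elig = stdp_eligibility_step(elig, delta_t=None)
w_new = apply_reward(w=1.0, elig=elig, reward=1.0)
```

An asymmetric variant, as used for one of the flexibility mechanisms above, would sign the kernel by delta_t (potentiation for post-after-pre, depression otherwise) instead of using the absolute value.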