The NEST Initiative is excited to invite everyone interested in Neural Simulation Technology and the NEST Simulator to the virtual NEST Conference 2022. The NEST Conference provides an opportunity for the NEST Community to meet, exchange success stories, swap advice, and learn about current developments in and around spiking network simulation with NEST and its applications. We particularly encourage young scientists to participate in the conference!
The NEST Conference 2022 will again be held as a virtual event on
Thursday/Friday, 23/24 June 2022.
Ján Antolík - Simulation of optogenetic based visual prosthetic stimulation in V1
Anton Arkhipov - Bio-realistic models of cortical circuits as a freely shared platform for discovery
Benedetta Gambosi - A spiking neural network control system for the investigation of sensori-motor protocols in neurorobotic simulations
Julian Göltz & Laura Kriener - Fast and energy-efficient neuromorphic deep learning with first-spike times
Anna Levina - Balance and adaptation in neuronal systems
Sacha J. van Albada - Unified Descriptions and Depictions of Network Connectivity
Alberto Antonietti - Brain-inspired spiking neural network controller for a neurorobotic whisker system
Dennis Terhorst - Connecting NEST
Gianmarco Tiddia - MPI parallel simulation of a multi-area spiking network model using NEST GPU
Guido Trensch - A Neuromorphic Compute Node Architecture for Reproducible Hyper-Real-Time Simulations of Spiking Neural Networks
Barna Zajzon - Signal denoising through topographic modularity of neural circuits
Jasper Albers - beNNch – Finding Performance Bottlenecks of Neuronal Network Simulators
Mohamed Ayssar Benelhedi - Fully automated model generation in PyNEST
Han-Jia Jiang - Modeling spiking networks with neuron-glia interactions in NEST
Sepehr Mahmoudian - Neurobiologically-constrained neural network implementation of cognitive function processing in NEST – the MatCo12 model
Jessica Mitchell - New ways into NEST user documentation
Sebastian Spreizer - NEST Desktop: Explore new frontiers
Leonardo Tonielli - Incremental Awake-NREM-REM Learning Cycles: Cognitive and Energetic Effects in a Multi-area Thalamo-Cortical Spiking Model
Jose Villamar - NEST is on the road to GPU integration
Sebastian Spreizer - NEST Desktop: A “Let’s Play Together” for neuroscience
Sign in to the conference tools and prepare for the day.
Log in to the Mattermost chat to get the latest news, say "Hi" to everyone, check your Zoom audio settings, chat with others or explore the virtual conference location on GatherTown.
The neural encoding of visual features in primary visual cortex (V1) is well understood, with strong correlates to low-level perception, making V1 a suitable target for vision restoration through neuroprosthetics. However, the functional relevance of neural dynamics evoked through external stimulation imposed directly at the cortical level remains poorly understood. In this talk I will present results from our recent simulation study (Antolik et al. 2021) that combined (1) a large-scale spiking neural network model of cat V1 and (2) a virtual prosthetic system that drives in situ optogenetically modified cortical tissue with a matrix light stimulator. We designed a protocol for translating simple Fourier contrasted visual stimuli (gratings) into activation patterns of the optogenetic matrix stimulator. We characterised the relationship between the spatial configuration of the imposed light pattern and the induced cortical activity. Our simulations show that, in the absence of visual drive (simulated blindness), optogenetic stimulation with a spatial resolution as low as 100 μm is sufficient to evoke activity patterns in V1 close to those evoked by normal vision. I will also discuss our recent unpublished effort to extend the simulations with neuron-morphology-dependent aspects of optogenetic light stimulation of neural tissue.
Spiking neural network simulations are establishing themselves as an effective tool for studying the dynamics of neuronal populations and the relationship between these dynamics and brain function. Further, advances in computational resources and technologies increasingly enable large-scale simulations that describe brain regions in detail. NEST GPU [1,2] is a GPU-based simulator under the NEST Initiative, written in CUDA-C++ for large-scale simulations of spiking neural networks. Here we evaluated its performance on the simulation of a multi-area model of macaque vision-related cortex [3, 4], made up of about 4 million neurons and 24 billion synapses. The outcome of the simulations is compared against that obtained using NEST 3.0 on a high-performance computing cluster. The results show a close match with NEST in the statistical measures of neural activity, together with remarkable performance in terms of simulation time per second of biological time. Indeed, using 32 compute nodes each equipped with an NVIDIA V100 GPU, NEST GPU simulated a second of biological time of the full-scale macaque cortex model in its metastable state 3.1x faster than NEST running on the same number of compute nodes, each equipped with two AMD EPYC 7742 CPUs (2x64 cores).
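To make the reported metric concrete, the minimal PyNEST sketch below times a toy balanced network and reports wall-clock time per second of biological time. The network, its parameters, and the thread count are illustrative placeholders only; they are unrelated to the multi-area model and do not use the NEST GPU interface benchmarked here.

    # Toy benchmark: wall-clock time per second of biological time (illustrative only)
    import time
    import nest

    nest.ResetKernel()
    nest.SetKernelStatus({"local_num_threads": 4})

    exc = nest.Create("iaf_psc_exp", 8000)
    inh = nest.Create("iaf_psc_exp", 2000)
    noise = nest.Create("poisson_generator", params={"rate": 8000.0})

    # Random balanced connectivity with fixed in-degrees (placeholder values)
    nest.Connect(exc, exc + inh,
                 conn_spec={"rule": "fixed_indegree", "indegree": 80},
                 syn_spec={"weight": 20.0, "delay": 1.5})
    nest.Connect(inh, exc + inh,
                 conn_spec={"rule": "fixed_indegree", "indegree": 20},
                 syn_spec={"weight": -100.0, "delay": 1.5})
    nest.Connect(noise, exc + inh, syn_spec={"weight": 20.0})

    start = time.time()
    nest.Simulate(1000.0)   # one second of biological time
    print(f"wall-clock seconds per biological second: {time.time() - start:.2f}")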
NEST Desktop is a GUI for NEST Simulator that helps users understand the script code typical for NEST Simulator. The aim of NEST Desktop is to be an intuitive and easy-to-learn application for newcomers that is also attractive for experienced users.
In this workshop we will demonstrate new and enhanced features of NEST Desktop that grew out of user feedback. During the session you can explore the interface and study different use cases with NEST Desktop, familiarizing yourself with the new features by working through examples of various scenarios.
Furthermore, we have prepared a video tutorial on the first steps in NEST Desktop [1], designed to guide new users through the application. We would like to produce more video tutorials and to collect interesting ideas for them, since they are a valuable aid for guiding beginners in classroom use cases. The workshop is a promising format for brainstorming among NEST experts; we will therefore hold a discussion round at the end of the workshop to gather further ideas for user tutorials.
Another topic of this discussion will be the future perspective of NEST Desktop and our development roadmap, including upcoming features and collaborations.
It is common for animals to use self-generated movements to actively sense the surrounding environment. For instance, rodents rhythmically move their whiskers to explore the space close to their body. The mouse whisker system has become a standard model for studying active sensing and sensorimotor integration through feedback loops. In this work, we developed a bio-inspired spiking neural network model of the peripheral sensorimotor whisker system, modelling the trigeminal ganglion, trigeminal nuclei, facial nuclei, and central pattern generator neuronal populations. This network was embedded in a virtual mouse robot using the Human Brain Project's Neurorobotics Platform, a simulation platform offering a virtual environment in which to develop and test robots driven by brain-inspired controllers. Finally, the peripheral whisker system was connected to an adaptive cerebellar network controller. The whole system was able to drive active whisking with learning capability, matching neural correlates of behaviour experimentally recorded in mice.
A central question in neuroscience is how the structure of brain circuits determines their activity and function. To explore this systematically, we developed a 230,000-neuron model of mouse primary visual cortex (area V1). The model integrates a broad array of experimental data: the distribution and morpho-electric properties of different neuron types in V1; connection probabilities, synaptic weights, axonal delays, and dendritic targeting rules inferred from a thorough survey of the literature; and a representation of visual inputs into V1 from the lateral geniculate nucleus. The model is shared freely with the community via brain-map.org, as are the datasets it is based on. We also openly share our software tools: the Brain Modeling ToolKit (BMTK) – a software suite for model building and simulation – and the SONATA file format. These tools leverage the excellent simulation capabilities of NEST, and work is ongoing to establish closer integration with it. We will discuss applications of our V1 model at different levels of resolution to various problems of broad interest, how this is enabled by NEST and our tools, and the opportunities this provides to the computational neuroscience community.
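For readers unfamiliar with these tools, the sketch below shows roughly how a network description can be assembled with the BMTK builder and written out in SONATA form, based on the publicly documented builder interface. The node and edge properties are made-up placeholders, not those of the V1 model.

    # Rough sketch of building and saving a SONATA network with the BMTK builder
    # (node/edge properties are placeholders, not those of the V1 model)
    from bmtk.builder.networks import NetworkBuilder

    net = NetworkBuilder("toy_net")
    net.add_nodes(N=80, pop_name="exc", ei="e", model_type="point_neuron")
    net.add_nodes(N=20, pop_name="inh", ei="i", model_type="point_neuron")

    # Connect every excitatory cell to every inhibitory cell with 5 synapses per pair
    net.add_edges(source={"ei": "e"}, target={"ei": "i"},
                  connection_rule=5, syn_weight=2.0, delay=1.5)

    net.build()
    net.save_nodes(output_dir="toy_network")
    net.save_edges(output_dir="toy_network")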
Meeting of the NEST Initiative members
It has long been suspected that deep learning in its current form is becoming infeasible, and new methods are increasingly being explored. One prominent idea is to look to the brain for inspiration, because there low energy consumption and fast reaction times are of critical importance. A central aspect of neural processing, and also of neuromorphic systems, is the use of spikes as a means of communication. However, the discrete and therefore discontinuous nature of spikes long made it difficult to apply optimization algorithms based on differentiable loss functions; this could only be bypassed by resorting to approximate methods.
Our solution to this problem is to operate on spike timings, as these are inherently differentiable. We sketch the derivation of an exact learning rule for spike times in networks of leaky integrate-and-fire neurons, implementing error backpropagation in hierarchical spiking networks. Furthermore, we present our implementation on the BrainScaleS-2 neuromorphic system and demonstrate its ability to harness the system's speed and energy characteristics. We explicitly address issues relevant for both biological plausibility and applicability to neuromorphic substrates, incorporating dynamics with finite time constants and optimizing the backward pass with respect to substrate variability.
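As a generic illustration of why spike times are differentiable (a sketch under simplified assumptions, not necessarily the exact parameterization used in the talk): before its first output spike, the membrane potential of a current-based leaky integrate-and-fire neuron is a weighted sum of postsynaptic potential kernels,

$$u(t) = \sum_i w_i\,\kappa(t - t_i), \qquad \kappa(s) \propto \Theta(s)\left(e^{-s/\tau_m} - e^{-s/\tau_s}\right),$$

and the output spike time $T$ is defined implicitly by the threshold condition $u(T) = \vartheta$. Whenever the threshold is crossed with nonzero slope, $\dot u(T) > 0$, the implicit function theorem gives

$$\frac{\partial T}{\partial w_i} = -\left.\frac{\partial u/\partial w_i}{\partial u/\partial t}\right|_{t=T} = -\frac{\kappa(T - t_i)}{\dot u(T)},$$

so $T$ depends differentiably on afferent weights and input spike times, and errors defined on output spike times can be backpropagated layer by layer with the ordinary chain rule.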
Our approach shows the potential benefits of using spikes to enable fast and energy-efficient inference on spiking neuromorphic hardware: on the BrainScaleS-2 chip, we classify the whole MNIST test set with an energy per classification of 8.4 µJ.
Despite the great strides neuroscience has made in recent decades, the underlying principles of brain function remain largely unknown. Advancing the field strongly depends on the ability to study large-scale neural networks and perform complex simulations. In this context, simulations in hyper-real-time are of high interest, but even the fastest supercomputer available today is not able to meet the challenge of accurate and reproducible simulation with hyper-real acceleration. The development of novel neuromorphic computer architectures holds out promise. Advances in System-on-Chip (SoC) device technology and tools now provide interesting new design possibilities for application-specific implementations. We propose a novel hybrid software-hardware architecture for a neuromorphic compute node intended to work in a multi-node cluster configuration [1]. The node design builds on the Xilinx Zynq-7000 SoC device architecture, which combines a powerful field-programmable gate array (FPGA) and a dual-core ARM Cortex-A9 processor extension on a single chip [2]. Although high acceleration can be achieved at low workloads, the development also reveals current technological limitations that likewise apply to CPU implementations of neural network simulation tools.
Information from the sensory periphery is conveyed to the cortex via structured projection pathways that spatially segregate stimulus features, providing a robust and efficient encoding strategy. Beyond sensory encoding, this prominent anatomical feature extends throughout the neocortex. However, the extent to which it influences cortical processing is unclear.
In this study, we combine cortical circuit modeling with network theory to demonstrate that the sharpness of topographic projections acts as a bifurcation parameter, controlling the macroscopic dynamics and representational precision across a large modular circuit of spiking neurons comprising multiple sub-networks. By shifting the balance of excitation and inhibition, topographic modularity gradually increases task performance and improves the signal-to-noise ratio across the system.
Using mean-field approximations, we gain deeper insight into the mechanisms responsible for the qualitative changes in the system's behavior and show that these depend only on the modular topographic connectivity and the stimulus intensity. We show that this is a robust and generic structural feature that enables a broad range of behaviorally relevant operating regimes: maintaining stable representations of multiple stimuli across cortical circuits; amplifying certain features while suppressing others, resembling winner-take-all circuits; and endowing circuits with metastable dynamics (winnerless competition), assumed to be fundamental in a variety of tasks.
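For orientation, one standard diffusion-approximation form of such a mean-field description (the exact formulation used in the study may differ) expresses the stationary rate $\nu_k$ of module $k$ through its input statistics:

$$\nu_k = \left[\tau_{\mathrm{ref}} + \tau_m \sqrt{\pi} \int_{\frac{V_r-\mu_k}{\sigma_k}}^{\frac{\theta-\mu_k}{\sigma_k}} e^{u^2}\bigl(1+\operatorname{erf}(u)\bigr)\, du\right]^{-1},$$

$$\mu_k = \tau_m \sum_j K_{kj} J_{kj}\, \nu_j + \mu_{\mathrm{ext}}, \qquad \sigma_k^2 = \tau_m \sum_j K_{kj} J_{kj}^2\, \nu_j + \sigma_{\mathrm{ext}}^2,$$

where $K_{kj}$ and $J_{kj}$ are the in-degrees and synaptic weights from module $j$ to module $k$. Sharpening the topographic projections concentrates the $K_{kj}$ on matching modules, shifting $\mu_k$ and $\sigma_k$ and thereby the self-consistent rates, in line with topographic modularity acting as a bifurcation parameter.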
NEST Simulator runs on a multitude of hardware platforms and operating systems. In the past year, huge advances in infrastructure and deployment have made NEST available through different channels [1, 2, 3, 4], each offering unique features for various use cases. In this talk we will highlight some of the available possibilities and give an overview of tool and service interoperability in the NEST ecosystem. This covers tools closely related to NEST itself, workflow and development tools, as well as services on cloud [4, 5] or HPC resources.
Computational neuroscientists have not yet agreed on a common way to describe high-level connectivity patterns in neuronal network models. Furthermore, different studies use different symbols to represent connectivity in network diagrams. This diversity of connectivity descriptions and depictions makes it more difficult to understand and reproduce modeling results. This issue is compounded by the fact that certain aspects of the connectivity that would be necessary for its unambiguous interpretation, such as whether self-connections are allowed, are sometimes omitted from descriptions. A review of published models from the databases ModelDB [1] and Open Source Brain [2] reveals that, despite models mostly still having simple connectivity, ambiguities in their description and depiction are not uncommon. From the use of connectivity in existing models, along with a review of simulation software (e.g., NEST [3]) and specification languages (e.g., CSA [4]), we derive a set of connectivity concepts for which we propose unified terminology with precise mathematical meanings [5]. We further propose a graphical notation to represent connectivity in network diagrams. These standardized descriptions and depictions enable modelers to specify connectivity concisely and unambiguously. Moreover, the derived concepts may serve to guide the implementation and naming of high-level connection routines in simulators like NEST.
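As a concrete example of how such concepts surface in a simulator, the following PyNEST snippet makes two of the frequently omitted details named above (self-connections and multiple contacts per pair) explicit in the connection specification; the model and parameters are arbitrary.

    # Connectivity details such as autapses and multapses made explicit (arbitrary parameters)
    import nest

    nest.ResetKernel()
    pop = nest.Create("iaf_psc_alpha", 100)

    # Random connectivity with connection probability 0.1 and no self-connections
    nest.Connect(pop, pop,
                 conn_spec={"rule": "pairwise_bernoulli", "p": 0.1,
                            "allow_autapses": False})

    # Fixed in-degree of 10 per target neuron, no self-connections and
    # at most one contact per source-target pair
    nest.Connect(pop, pop,
                 conn_spec={"rule": "fixed_indegree", "indegree": 10,
                            "allow_autapses": False, "allow_multapses": False})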
The balance of excitation and inhibition in neuronal circuits is essential for stable dynamics. This is probably why various brain regions show distinct and highly conserved ratios of excitatory and inhibitory neurons. However, it is unclear whether biological neuronal networks with artificial ratios of inhibitory and excitatory neurons would exhibit changes in dynamics. Moreover, it is unclear whether such artificial ratios would jeopardize the balance of excitation and inhibition at the synaptic level. To investigate these questions, we recorded the Ca-activity of hippocampal cultures with various fractions of inhibitory neurons. All cultures developed spontaneous network bursting. The cultures with various fractions of inhibitory neurons showed stable mean inter-burst intervals; however, the variance of the inter-burst intervals grew with the number of inhibitory neurons. We reproduced the experimental results in a model network of adaptive leaky integrate-and-fire neurons with different numbers of inhibitory neurons but balanced numbers of excitatory and inhibitory synapses.
Overall, our results suggest that hippocampal cultures with various cellular compositions tend to maintain the balance of excitation and inhibition.
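A minimal PyNEST sketch of this balancing idea, assuming an adaptive integrate-and-fire model and illustrative parameters (not those of the study): the inhibitory fraction varies, while fixed in-degrees keep the number of excitatory and inhibitory synapses per neuron constant.

    # Vary the inhibitory fraction while keeping synapse numbers per neuron fixed
    # (illustrative neuron model and parameters, not those of the study)
    import nest

    def build_network(n_total=1000, inh_fraction=0.2, k_exc=80, k_inh=20):
        nest.ResetKernel()
        n_inh = int(inh_fraction * n_total)
        exc = nest.Create("aeif_psc_exp", n_total - n_inh)
        inh = nest.Create("aeif_psc_exp", n_inh)

        # Each neuron receives exactly k_exc excitatory and k_inh inhibitory
        # synapses, regardless of the cellular composition of the network.
        nest.Connect(exc, exc + inh,
                     conn_spec={"rule": "fixed_indegree", "indegree": k_exc},
                     syn_spec={"weight": 30.0, "delay": 1.0})
        nest.Connect(inh, exc + inh,
                     conn_spec={"rule": "fixed_indegree", "indegree": k_inh},
                     syn_spec={"weight": -120.0, "delay": 1.0})
        return exc, inh

    for frac in (0.1, 0.2, 0.4):
        build_network(inh_fraction=frac)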
We propose a functional, bio-inspired multi-area model for motor control in NEST [1], in which information is frequency-coded and exchanged between spiking neurons.
Our model consists of a controller, representing the central nervous system, and an effector, modelled as an arm and implemented with PyBullet [2] (Fig 1).
Different functional areas build up the controller, each modelled with spiking neuronal populations that we implemented ad hoc to perform mathematical operations (e.g., Bayesian integration [3]). Additionally, to study the cerebellar role in motor adaptation, we included a detailed model of the cerebellum [4], consisting of EGLIF neurons [5] and ad-hoc spike-timing-dependent plasticity rules [6].
Finally, to manage the communication between the brain and the arm, we make use of the MUSIC interface [7].
We used the model to control a single degree of freedom of the elbow joint. Preliminary simulations show proper signal transmission among the areas of the model, bio-inspired encoding/decoding of end-effector signals, and learning capability driven by the cerebellum. Finally, the MPI-based setup enables the use of distributed resources (i.e., we tested the system with 10 parallel MPI processes). This allows us to address the computational requirements of the simulations and will also facilitate the control of multiple DoFs in future studies.
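As a toy illustration of the frequency-coding idea (not the actual controller, cerebellar model, or MUSIC-based setup), a scalar command can be encoded as a Poisson rate, relayed through a small spiking population, and read back as a population rate:

    # Toy frequency coding/decoding of a scalar command (illustrative only)
    import nest

    nest.ResetKernel()

    command = 0.7                                   # normalized joint command in [0, 1]
    drive = nest.Create("poisson_generator",
                        params={"rate": 100.0 + 400.0 * command})  # assumed linear rate code
    pop = nest.Create("iaf_psc_exp", 50)
    rec = nest.Create("spike_recorder")

    nest.Connect(drive, pop, syn_spec={"weight": 50.0})
    nest.Connect(pop, rec)

    t_sim = 500.0                                   # ms
    nest.Simulate(t_sim)

    n_spikes = len(rec.get("events")["times"])
    rate = n_spikes / (len(pop) * t_sim / 1000.0)   # decoded signal in spikes/s per neuron
    print(f"decoded population rate: {rate:.1f} spikes/s")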