NEST Conference 2022

Europe/Berlin
Virtual
Description

The NEST Initiative is excited to invite everyone interested in Neural Simulation Technology and the NEST Simulator to the virtual NEST Conference 2022. The NEST Conference provides an opportunity for the NEST Community to meet, exchange success stories, swap advice, and learn about current developments in and around spiking network simulation with NEST and its applications. We particularly encourage young scientists to participate in the conference!

The virtual NEST Conference 2022

The NEST Conference 2022 will again be held as a virtual event on

Thursday/Friday, 23/24 June 2022

Registration and the Call for Contributions are closed.

Programme

Keynotes

Ján Antolík - Simulation of optogenetic based visual prosthetic stimulation in V1

Anton Arkhipov - Bio-realistic models of cortical circuits as a freely shared platform for discovery

Benedetta Gambosi - A spiking neural network control system for the investigation of sensori-motor protocols in neurorobotic simulations.

Julian Göltz & Laura Kriener - Fast and energy-efficient neuromorphic deep learning with first-spike times

Anna Levina - Balance and adaptation in neuronal systems

Talks

Sacha J. van Albada - Unified Descriptions and Depictions of Network Connectivity

Alberto Antonietti - Brain-inspired spiking neural network controller for a neurorobotic whisker system

Dennis Terhorst - Connecting NEST

Gianmarco Tiddia - MPI parallel simulation of a multi-area spiking network model using NEST GPU

Guido Trensch - A Neuromorphic Compute Node Architecture for Reproducible Hyper-Real-Time Simulations of Spiking Neural Networks

Barna Zajzon - Signal denoising through topographic modularity of neural circuits

Posters

Jasper Albers - beNNch – Finding Performance Bottlenecks of Neuronal Network Simulators

Mohamed Ayssar Benelhedi - Fully automated model generation in PyNEST

Han-Jia Jiang - Modeling spiking networks with neuron-glia interactions in NEST

Sepehr Mahmoudian - Neurobiologically-constrained neural network implementation of cognitive function processing in NEST – the MatCo12 model

Jessica Mitchell - New ways into NEST user documentation

Sebastian Spreizer - NEST Desktop: Explore new frontiers

Leonardo Tonielli - Incremental Awake-NREM-REM Learning Cycles: Cognitive and Energetic Effects in a Multi-area Thalamo-Cortical Spiking Model

Jose Villamar - NEST is on the road to GPU integration

Workshops

Sebastian Spreizer - NEST Desktop: A “Let’s Play Together” for neuroscience

  • Thursday 23 June
    • 09:00 09:15
      Registration 15m

      Sign in to the conference tools and prepare for the day.
      Log in to the Mattermost chat to get the latest news, say "Hi" to everyone, check your Zoom audio settings, chat with others or explore the virtual conference location on GatherTown.

    • 09:15 09:35
      Welcome & Introduction 20m
      Speaker: Hans E. Plesser ( Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway; Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany)
    • 09:35 10:20
      Keynote
      • 09:35
        Simulation of optogenetic based visual prosthetic stimulation in V1 45m

        The neural encoding of visual features in primary visual cortex (V1) is well understood, with strong correlates to low-level perception, making V1 a suitable target for vision restoration through neuroprosthetics. However, the functional relevance of neural dynamics evoked through external stimulation directly imposed at the cortical level remains poorly understood. In the talk I will present results from our recent simulation study (Antolik et al. 2021) that combined (1) a large-scale spiking neural network model of cat V1 and (2) a virtual prosthetic system that drives in situ optogenetically modified cortical tissue with a matrix light stimulator. We designed a protocol for translating simple Fourier contrasted visual stimuli (gratings) into activation patterns of the optogenetic matrix stimulator. We characterised the relationship between the spatial configuration of the imposed light pattern and the induced cortical activity. Our simulations show that in the absence of visual drive (simulated blindness) optogenetic stimulation with a spatial resolution as low as 100 μm is sufficient to evoke activity patterns in V1 close to those evoked by normal vision. I will also discuss our recent unpublished effort to expand the simulations with neuron morphology dependent aspects of optogenetic light stimulation of neural tissue.

        Speaker: Ján Antolik (Charles University)
    • 10:20 10:30
      Group Photo 10m
      Speaker: Hans E. Plesser ( Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway; Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany)
    • 10:30 10:55
      Talks
      • 10:30
        MPI parallel simulation of a multi-area spiking network model using NEST GPU 25m

        Spiking neural network simulations are establishing themselves as an effective tool for studying the dynamics of neuronal populations and the relationship between these dynamics and brain functions. Further, advances in computational resources and technologies are increasingly enabling large-scale simulations capable of describing brain regions in detail. NEST GPU [1,2] is a GPU-based simulator under the NEST Initiative written in CUDA-C++ for large-scale simulations of spiking neural networks. Here we evaluated its performance on the simulation of a multi-area model of macaque vision-related cortex [3, 4], made up of about 4 million neurons and 24 billion synapses. The outcome of the simulations is compared against that obtained using NEST 3.0 on a high-performance computing cluster. The results show a close match with NEST in statistical measures of neural activity, together with remarkable achievements in terms of simulation time per second of biological activity. Indeed, using 32 compute nodes, each equipped with an NVIDIA V100 GPU, NEST GPU simulated a second of biological time of the full-scale macaque cortex model in its metastable state 3.1x faster than NEST running on the same number of compute nodes, each equipped with two AMD EPYC 7742 processors (2x64 cores).

        Speaker: Gianmarco Tiddia (Department of Physics, University of Cagliari, Monserrato (CA), Italy and Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Cagliari, Monserrato (CA), Italy)
    • 10:55 11:15
      Short break 20m
    • 11:15 12:45
      Workshop
      • 11:15
        NEST Desktop: A “Let’s Play Together” for neuroscience 1h 30m

        NEST Desktop is a GUI for the NEST Simulator that helps users understand the script code typical of the NEST Simulator. It aims to be an intuitive, easy-to-learn application for newcomers that is also attractive to experienced users.

        In this workshop we will demonstrate novel and enhanced features of NEST Desktop developed in response to user feedback. During the session you can explore the interface and study different use cases with NEST Desktop, familiarizing yourself with the interface and the new features through examples of various scenarios.

        Furthermore, we have prepared a video tutorial on the first steps in NEST Desktop [1], designed to guide new users through the application. We would like to produce more video tutorials and to collect interesting ideas for them, since they are a valuable aid for guiding beginners in classroom use cases. The workshop is a promising format for brainstorming among NEST experts; we will therefore close with a discussion round to gather further ideas for user tutorials.

        Another topic of this discussion will be the future perspective of NEST Desktop and our development roadmap, including upcoming features and collaborations.

        Speaker: Sebastian Spreizer (University of Trier)
    • 12:45 13:45
      Lunch Break 1h
    • 13:45 14:10
      Talks
      • 13:45
        Brain-inspired spiking neural network controller for a neurorobotic whisker system 25m

        It is common for animals to use self-generated movements to actively sense the surrounding environment. For instance, rodents rhythmically move their whiskers to explore the space close to their body. The mouse whisker system has become a standard model for studying active sensing and sensorimotor integration through feedback loops. In this work, we developed a bioinspired spiking neural network model of the sensorimotor peripheral whisker system, modelling trigeminal ganglion, trigeminal nuclei, facial nuclei, and central pattern generator neuronal populations. This network was embedded in a virtual mouse robot, exploiting the Human Brain Project's Neurorobotics Platform, a simulation platform offering a virtual environment to develop and test robots driven by brain-inspired controllers. Finally, the peripheral whisker system was connected to an adaptive cerebellar network controller. The whole system was able to drive active whisking with learning capability, matching neural correlates of behaviour experimentally recorded in mice.

        Speaker: Alberto Antonietti (BBP/EPFL)
    • 14:10 14:22
      Poster: Poster teasers
      • 14:10
        Incremental Awake-NREM-REM Learning Cycles: Cognitive and Energetic Effects in a Multi-area Thalamo-Cortical Spiking Model 3m

        INTRODUCTION/MOTIVATION
        This work leverages the Apical Isolation (AI) and Apical Drive (AD) [1][2] principles to induce in a model some of the favourable energetic and cognitive effects associated with NREM and REM sleep. We also follow the Apical Amplification (AA) [3] concept during awake learning. In this way, we added REM to the brain states accessible to a thalamo-cortical spiking model [4][5] already capable of expressing realistic AWAKE and NREM brain dynamics.

        METHODS
        We developed a multi-area thalamo-cortical spiking model made of integrate-and-fire neurons: the thalamic layer provides perceptual input through contralateral feedforward connections to cortical areas, which gather and process such information by means of Spike-Timing-Dependent-Plasticity and Winner-Take-All Cell-Assemblies (CA) circuitry (Fig 1A).

        RESULTS-AND-DISCUSSION
        We found that the optimal sleep-stage durations are 40 s of NREM and 10 s of REM (Fig. 1D), corresponding to a 22% reduction in network power consumption and a 1% improvement in classification accuracy, in agreement with experimental data [8,11]; moreover, the sleep rhythms were comparable with biological recordings [9] (Fig. 1B).
        Cell-Assemblies trained on different examples of the same digit are grouped together within each area during NREM, whereas those belonging to different cortical areas are associated during the REM stage (Fig. 1C).
        Figure 1D demonstrates after-sleep homeostasis of cortico-cortical synapses.

        Speaker: Mr Leonardo Tonielli (Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Rome, Italy)
      • 14:13
        beNNch – Finding Performance Bottlenecks of Neuronal Network Simulators 3m

        Modern computational neuroscience seeks to explain the dynamics and function of the brain by constructing models with ever more biological detail. This can, for example, take the form of sophisticated connectivity schemes [1] or involve the simultaneous simulation of multiple brain areas [2]. To enable progress in these studies, the simulation of models needs to become faster, calling for more efficient implementations of the underlying simulators. Performance benchmarking guides software development since it is hard to predict the impact of algorithm adaptations on the performance of complex software such as neuronal network simulators [3]. The particular challenge for these simulators is that executing benchmarks naturally involves the simulation of a diverse range of network models as they may uncover different performance limitations due to their variation in size, synaptic density and distribution of delays [4]. In addition, maintaining an accessible library of past results while keeping track of metadata that specifies hardware, software, simulator and model configurations is a difficult task. Here, we introduce beNNch [5] – a recently developed framework for benchmarking neuronal network simulations – and walk through a typical use case, highlighting how it simplifies workflows and enables sustainable use of computing resources.

        Speaker: Jasper Albers (Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany; RWTH Aachen University, Aachen, Germany)
      • 14:16
        NEST Desktop: Explore new frontiers 3m

        NEST Desktop [1] is a web-based graphical user interface (GUI) which comprises graphical elements for creating and configuring network models, running simulations in the NEST Simulator, and visualizing and analyzing the results. It allows students to explore important concepts in computational neuroscience without the need to learn a simulator control language beforehand.

        NEST Desktop separates the GUI from the simulation kernel, but it still needs a NEST Simulator on the user's machine. Since the last release, NEST Desktop can connect to the in-situ pipeline “Insite” [2], which makes it possible to visualize data sets from an ongoing NEST simulation. This enhances the interactivity of NEST for large simulations on HPC facilities. Furthermore, it enables parallel usage with ViSimpl [3] for better visualization of spatial networks, or with the NeuroRobotics Platform [4] to perform experiments on (virtual) robots.

        In order to give students, teachers, and researchers installation-free access to compute resources, we integrated NEST Desktop into the EBRAINS infrastructure [5]. The same code remains available as a stand-alone version of NEST Desktop for teaching and training applications and for installations at other sites.

        Speaker: Sebastian Spreizer (University of Trier)
      • 14:19
        NEST is on the road to GPU integration 3m

        Most of the Top500 computer systems and all of the upcoming exascale machines employ GPUs alongside CPUs. To get the most performance out of these architectures, simulation software requires efficient support for both processor types. Decades of simulator development enable the routine simulation of large-scale neuronal network models on thousands of many-core CPUs in parallel [1]; recent GPU implementations show highly competitive results [2, 3]. Here, we present our project to integrate NEST GPU (formerly NeuronGPU [3]) into the ecosystem of the CPU-based simulator NEST [4]. NEST GPU, written in CUDA-C++, lends itself to this integration due to a similar interface and a modular structure. The development will continue within the NEST Initiative under the same GitHub organization [5], although the codes themselves are still separate. We pursue the unified, community-centered workflow already pioneered by NEST: build processes, model development (NESTML [6]), documentation standards along with quality assurance through continuous integration. We are looking forward to a fruitful exchange between NEST and NEST GPU, enabling the optimization of simulator performance under the hood while providing a common frontend for users to seamlessly harness both CPUs and GPUs in the future.

        Speaker: Jose Villamar (Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany; RWTH Aachen University, Aachen, Germany)
    • 14:22 14:30
      Contingency Break 8m
    • 14:30 15:30
      Poster: Poster session 1
    • 15:30 16:00
      Short break 30m
    • 16:00 16:45
      Keynote
      • 16:00
        Bio-realistic models of cortical circuits as a freely shared platform for discovery 45m

        A central question in neuroscience is how the structure of brain circuits determines their activity and function. To explore this systematically, we developed a 230,000-neuron model of mouse primary visual cortex (area V1). The model integrates a broad array of experimental data: distribution and morpho-electric properties of different neuron types in V1; connection probabilities, synaptic weights, axonal delays, dendritic targeting rules inferred from a thorough survey of the literature; and a representation of visual inputs into V1 from the lateral geniculate nucleus. The model is shared freely with the community via brain-map.org, as are the datasets it is based on. We also openly share our software tools: the Brain Modeling ToolKit (BMTK) – a software suite for model building and simulation – and the SONATA file format. These tools leverage the excellent simulation capabilities of NEST, and work is ongoing to establish closer integration with it. We will discuss applications of our V1 model at different levels of resolution to various problems of broad interest, how this is enabled by NEST and our tools, and the opportunities this provides to the computational neuroscience community.

        Speaker: Anton Arkhipov (Allen Institute)
    • 17:30 18:30
      NEST Initiative General Assembly 1h

      Meeting of the NEST Initiative members

      Speaker: Hans E. Plesser ( Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway; Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany)
  • Friday 24 June
    • 09:00 09:45
      Keynote
      • 09:00
        Fast and energy-efficient neuromorphic deep learning with first-spike times 45m

        It has long been argued that Deep Learning in its current form is becoming infeasible, and new methods are increasingly being explored. One prominent idea is to look to the brain for inspiration, since low energy consumption and fast reaction times are of critical importance there. A central aspect of neural processing, and also of neuromorphic systems, is the use of spikes as a means of communication. However, the discrete and therefore discontinuous nature of spikes long made it difficult to apply optimization algorithms based on differentiable loss functions; this could only be bypassed by resorting to approximate methods.
        Our solution for this problem is to operate on spike timings as these are inherently differentiable. We sketch the derivation of an exact learning rule for spike times in networks of leaky integrate-and-fire neurons, implementing error backpropagation in hierarchical spiking networks. Furthermore, we present our implementation on the BrainScaleS-2 neuromorphic system and demonstrate its capability of harnessing the system's speed and energy characteristics. We explicitly address issues that are also relevant for both biological plausibility and applicability to neuromorphic substrates, incorporating dynamics with finite time constants and optimizing the backward pass with respect to substrate variability.
        Our approach shows the potential benefits of using spikes to enable fast and energy-efficient inference on spiking neuromorphic hardware: on the BrainScaleS-2 chip, we classify the whole MNIST test data set with an energy per classification of 8.4 µJ.

        Speakers: Julian Göltz (Kirchhoff-Institute for Physics, Heidelberg University; Department of Physiology, University of Bern), Laura Kriener (Department of Physiology, University of Bern; Kirchhoff-Institute for Physics, Heidelberg University)
    • 09:45 10:00
      Short break 15m
    • 10:00 11:40
      Talks
      • 10:00
        A Neuromorphic Compute Node Architecture for Reproducible Hyper-Real-Time Simulations of Spiking Neural Networks 25m

        Despite the great strides neuroscience has made in recent decades, the underlying principles of brain function remain largely unknown. Advancing the field strongly depends on the ability to study large-scale neural networks and perform complex simulations. In this context, simulations in hyper-real-time are of high interest, but even the fastest supercomputer available today is not able to meet the challenge of accurate and reproducible simulation with hyper-real acceleration. The development of novel neuromorphic computer architectures holds out promise. Advances in System-on-Chip (SoC) device technology and tools are now providing interesting new design possibilities for application-specific implementations. We propose a novel hybrid software-hardware architecture approach for a neuromorphic compute node intended to work in a multi-node cluster configuration [1]. The node design builds on the Xilinx Zynq-7000 SoC device architecture, which combines a field-programmable gate array (FPGA) and a dual-core ARM Cortex-A9 processor on a single chip [2]. Although high acceleration can be achieved at low workloads, the development also reveals current technological limitations that likewise apply to CPU implementations of neural network simulation tools.

        Speaker: Mr Guido Trensch (Simulation and Data Laboratory Neuroscience, Jülich Supercomputing Centre, Institute for Advanced Simulation, Jülich Research Centre, Jülich, Germany)
      • 10:25
        Signal denoising through topographic modularity of neural circuits 25m

        Information from the sensory periphery is conveyed to the cortex via structured projection pathways that spatially segregate stimulus features, providing a robust and efficient encoding strategy. Beyond sensory encoding, this prominent anatomical feature extends throughout the neocortex. However, the extent to which it influences cortical processing is unclear.

        In this study, we combine cortical circuit modeling with network theory to demonstrate that the sharpness of topographic projections acts as a bifurcation parameter, controlling the macroscopic dynamics and representational precision across a large modular circuit of spiking neurons comprising multiple sub-networks. By shifting the balance of excitation and inhibition, topographic modularity gradually increases task performance and improves the signal-to-noise ratio across the system.

        Using mean-field approximations, we gain deeper insight into the mechanisms responsible for the qualitative changes in the system's behavior and show that these depend only on the modular topographic connectivity and stimulus intensity. We show that this is a robust and generic structural feature that enables a broad range of behaviorally relevant operating regimes: maintaining stable representations of multiple stimuli across cortical circuits; amplifying certain features while suppressing others, resembling winner-take-all circuits; and endowing circuits with metastable dynamics (winnerless competition), assumed to be fundamental in a variety of tasks.

        Speaker: Barna Zajzon (Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany)
      • 10:50
        Connecting NEST 25m

        NEST Simulator runs on a multitude of different hardware platforms and operating systems. In the past year huge advances in infrastructure and deployments have made NEST available through different channels [1, 2, 3, 4], each offering unique features for various use-cases. In this talk we will highlight some of the available possibilities and give an overview of the tool and service interoperability in the NEST ecosystem. This covers tools closely related to NEST itself, workflow and development tools, as well as services on cloud [4, 5] or HPC resources.

        Speaker: Dennis Terhorst (Institute of Neuroscience and Medicine (INM-6) Computational and Systems Neuroscience & Theoretical Neuroscience, Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Member of the Helmholtz Association and JARA, Forschungszentrum Jülich GmbH)
      • 11:15
        Unified Descriptions and Depictions of Network Connectivity 25m

        Computational neuroscientists have not yet agreed on a common way to describe high-level connectivity patterns in neuronal network models. Furthermore, different studies use different symbols to represent connectivity in network diagrams. This diversity of connectivity descriptions and depictions makes it more difficult to understand and reproduce modeling results. This issue is compounded by the fact that certain aspects of the connectivity that would be necessary for its unambiguous interpretation, such as whether self-connections are allowed, are sometimes omitted from descriptions. A review of published models from the databases ModelDB [1] and Open Source Brain [2] reveals that, despite models mostly still having simple connectivity, ambiguities in their description and depiction are not uncommon. From the use of connectivity in existing models, along with a review of simulation software (e.g., NEST [3]) and specification languages (e.g., CSA [4]), we derive a set of connectivity concepts for which we propose unified terminology with precise mathematical meanings [5]. We further propose a graphical notation to represent connectivity in network diagrams. These standardized descriptions and depictions enable modelers to specify connectivity concisely and unambiguously. Moreover, the derived concepts may serve to guide the implementation and naming of high-level connection routines in simulators like NEST.

        Speaker: Sacha J. van Albada (Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany; Institute of Zoology, University of Cologne, Cologne, Germany)
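        The ambiguity the talk highlights can be made concrete with a small sketch. The following plain-Python illustration (deliberately simulator-independent, not NEST code) implements a pairwise-Bernoulli connection rule and shows how the often-omitted choice of whether self-connections (autapses) are allowed changes the realized connectivity; the function and parameter names are ours, chosen for illustration only.

        ```python
        import random

        def pairwise_bernoulli(sources, targets, p, allow_autapses, rng=None):
            """Illustrative pairwise-Bernoulli rule: connect each
            (source, target) pair independently with probability p,
            optionally excluding self-connections (autapses)."""
            rng = rng or random.Random(42)
            connections = []
            for s in sources:
                for t in targets:
                    if not allow_autapses and s == t:
                        continue  # the ambiguous case: is s -> s permitted?
                    if rng.random() < p:
                        connections.append((s, t))
            return connections

        nodes = list(range(10))
        with_autapses = pairwise_bernoulli(nodes, nodes, 1.0, allow_autapses=True)
        without_autapses = pairwise_bernoulli(nodes, nodes, 1.0, allow_autapses=False)

        # With p = 1.0, the difference is exactly the 10 self-connections.
        print(len(with_autapses) - len(without_autapses))  # prints 10
        ```

        A textual model description that omits this flag leaves two structurally different networks consistent with it, which is precisely the kind of ambiguity a unified terminology would remove.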
    • 11:40 12:00
      Short break 20m
    • 12:00 12:25
      Keynote
      • 12:00
        Balance and adaptation in neuronal systems 25m

        The balance of excitation and inhibition in neuronal circuits is essential for stable dynamics. This is probably why various brain regions show distinct and highly conserved ratios of excitatory and inhibitory neurons. However, it is unclear whether biological neuronal networks with artificial ratios of inhibitory and excitatory neurons would exhibit changes in dynamics, and whether such artificial ratios would jeopardize the balance of excitation and inhibition at the synaptic level. To investigate these questions, we recorded the Ca-activity of hippocampal cultures with various fractions of inhibitory neurons. All cultures developed spontaneous network bursting. The cultures with various fractions of inhibitory neurons showed stable mean inter-burst intervals; however, the variance of the inter-burst intervals grew with the number of inhibitory neurons. We reproduced the experimental results in a model network of adaptive leaky integrate-and-fire neurons with different numbers of inhibitory neurons but balanced numbers of excitatory and inhibitory synapses.
        Overall, our results suggest that hippocampal cultures with various cellular compositions tend to maintain the balance of excitation and inhibition.

        Speaker: Anna Levina (University of Tübingen, Mathematisch-Naturwissenschaftliche Fakultät, Fachbereich Informatik, Germany)
    • 12:25 13:30
      Lunch 1h 5m
    • 13:30 13:42
      Poster: Poster teasers
      • 13:30
        New ways into NEST user documentation 3m

        In recent months, the NEST user documentation website has seen major overhauls of its structure and visual style. Here we present these changes to the community. The changes focus on discovering and accessing content in different ways, with the goal of making the available documentation more visible to users. A new theme modernizes the look and improves discoverability of the docs. NEST thrives on its community sharing ideas and contributing changes to code and documentation. With the updated layout, we also aim for a website that encourages involvement from the community. Participation in the development of code and documentation helps us continually improve and accommodate the needs of the community.

        Speaker: Jessica Mitchell
      • 13:33
        Modeling spiking networks with neuron-glia interactions in NEST 3m

        Recent experimental evidence suggests active roles of astrocytes in a number of brain functions and demonstrates coordinated neuronal and astrocytic activity in vivo [1]. In the cortex, astrocytes form non-overlapping domains, each containing several hundred neurons and ~100,000 synapses [2]. Astrocytic processes are in close contact with synaptic terminals and affect synaptic transmission, plasticity, and neuronal excitability [3, 4]. Understanding the role of astrocytic mechanisms in brain functions and dysfunctions requires open-access tools for model implementation, simulation, and analysis. In the past decade, hundreds of new models with some form of neuron-astrocyte interaction dynamics have been proposed. However, their implementation is rarely shared and not sufficiently documented to reproduce the findings [4, 5]. We developed a new module in the NEST simulator that allows efficient implementation and simulation of large neuron-astrocyte populations. This includes an astrocyte model with internal calcium dynamics, a synapse model for communication between astrocytes and postsynaptic neurons, and user-friendly, efficient high-level connectivity functions that allow probabilistic or deterministic pairing of neurons and astrocytes. This new module will improve the convenience, reliability, and reproducibility of computational studies involving neuron-astrocyte interactions.

        Speaker: Han-Jia Jiang (Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany; Institute of Zoology, Faculty of Mathematics and Natural Sciences, University of Cologne, Cologne, Germany)
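        To illustrate the pairing idea mentioned in the abstract, here is a minimal plain-Python sketch (not the actual NEST module API; all names are ours) of probabilistic neuron-astrocyte pairing under non-overlapping domains: each neuron falls into exactly one astrocyte's domain, and is then paired with that astrocyte with a given probability.

        ```python
        import random

        def pair_neurons_astrocytes(n_neurons, n_astrocytes, p_pair, seed=1):
            """Illustrative sketch: assign each neuron to the astrocyte whose
            domain it falls in (non-overlapping domains), then pair the neuron
            with that astrocyte with probability p_pair (None = unpaired)."""
            rng = random.Random(seed)
            pairings = {}
            for neuron in range(n_neurons):
                # Non-overlapping domains: each neuron belongs to exactly
                # one astrocyte, determined here by equal-sized index ranges.
                astro = neuron * n_astrocytes // n_neurons
                pairings[neuron] = astro if rng.random() < p_pair else None
            return pairings

        pairs = pair_neurons_astrocytes(n_neurons=1000, n_astrocytes=10, p_pair=1.0)
        # With p_pair = 1.0, every neuron is paired, and each astrocyte's
        # domain contains exactly 100 neurons.
        ```

        Setting p_pair below 1.0 yields the probabilistic variant; replacing the random draw with a fixed rule gives the deterministic one.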
      • 13:36
        Neurobiologically-constrained neural network implementation of cognitive function processing in NEST – the MatCo12 model 3m

        In the project ‘MatCo: Material constraints enabling human cognition’, we use neurobiologically informed network models of cognition and language. These networks implement macroscopic cortical areas and their connectivity along with microscopic properties addressing the connectivity, functionality, and plasticity of neurons [1]. Such brain-constrained models enable studying cognition, natural language, and their relationship to basic neuroscience principles and non-cognitive sensorimotor processes. The version discussed here, called ‘MatCo12’, implements 12 fronto-temporo-occipital areas relevant for language and cognition, and offers neurobiological accounts, for example, of neural changes following sensory deprivation [2] and of the learning of concrete and abstract concepts [3].

        The MatCo12 model was built with the FELIX simulator [4,5]. To make MatCo12 accessible to a wider audience and allow for faster and larger simulations, we implemented its core building blocks in NEST [6]. The neuron model is a point neuron with internal adaptation. The Hebbian synaptic learning rule [7] determines weight changes at every time step based on the low-pass-filtered activity of the presynaptic neuron and the membrane potential of the postsynaptic neuron. Consequently, long-term potentiation or long-term heterosynaptic or homosynaptic depression takes place. We present results demonstrating the functionality of Hebbian learning in the NEST implementation and show first results of large-scale network simulations.

        Speaker: Sepehr Mahmoudian (Freie Universität Berlin)
      • 13:39
        Fully automated model generation in PyNEST 3m

        NESTML is a concise modeling language for neuron and synapse models. It comes with a software toolchain to generate efficient simulation code for different target platforms.

        Previously, all neuron-synapse combinations involving synapse models that depend on postsynaptic variables, such as spike-timing-dependent plasticity (STDP), had to be provided manually to NESTML before running the simulation. We have now developed a just-in-time (JIT) framework that eliminates this step by intercepting function calls in PyNEST, invoking the NESTML workflow, and making all model classes available for use in the network simulation in a completely automated manner.

        One drawback of this approach, however, is that neuron parameters only become available after model instances have been connected, as connecting with a synapse model like STDP might modify the parameters of a neuron model. Caching such attributes at the Python level at create time could solve this problem, but would double the amount of memory required. To overcome this issue, we instead modified the C++ data structures holding the model parameters, making the model independent of its parameters, which also opens up possibilities for future optimizations.

        Speaker: Mohamed Ayssar Benelhedi (JSC)
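        The interception-and-generation pattern described above can be sketched in a few lines of plain Python. This is a hypothetical illustration of the JIT idea only; the class, the stubbed generator, and the model name are ours and do not reflect the actual NESTML or PyNEST implementation.

        ```python
        # Hypothetical sketch: intercept a model request, run a (stubbed)
        # code-generation step on first use, and serve cached results after.

        class JitModelRegistry:
            def __init__(self, generate_fn):
                self._generate = generate_fn  # would invoke the NESTML toolchain
                self._cache = {}
                self.generated = []           # record of generation invocations

            def get(self, model_name):
                # First request triggers generation; later requests hit the cache.
                if model_name not in self._cache:
                    self.generated.append(model_name)
                    self._cache[model_name] = self._generate(model_name)
                return self._cache[model_name]

        # Stub standing in for the expensive code-generation step.
        registry = JitModelRegistry(lambda name: f"compiled<{name}>")

        registry.get("iaf_psc_exp__with_stdp")  # triggers generation
        registry.get("iaf_psc_exp__with_stdp")  # served from cache
        print(registry.generated)               # prints ['iaf_psc_exp__with_stdp']
        ```

        In the real framework the interception happens inside PyNEST calls such as model creation and connection, so the user never invokes the toolchain explicitly.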
    • 13:42 14:00
      Contingency Break 18m
    • 14:00 15:00
      Poster: Poster Session 2
    • 15:00 15:45
      Keynote
      • 15:00
        A spiking neural network control system for the investigation of sensori-motor protocols in neurorobotic simulations. 45m

        We propose a functional bio-inspired multi-area model in NEST [1] for motor control where the information is frequency-coded and exchanged between spiking neurons.

        Our model consists of a controller, representing the central nervous system, and an effector, modelled as an arm and implemented with PyBullet [2] (Fig 1).

        Different functional areas build up the controller, each modelled with spiking neuronal populations, which we implemented ad hoc to perform mathematical operations (e.g., Bayesian integration [3]). Additionally, to study the cerebellar role in motor adaptation, we included a detailed model of the cerebellum [4], consisting of EGLIF neurons [5] and ad-hoc Spike-Timing-Dependent Plasticity rules [6].

        Finally, to manage the communication between the brain and the arm, we make use of the MUSIC interface [7].

        We used the model to control a single degree of freedom in the elbow joint. Preliminary simulations show proper signal transmission among the areas of the model, bio-inspired encoding/decoding of end-effector signals, and learning capability driven by the cerebellum. Finally, the MPI-based setup enables the use of distributed resources (i.e., we tested the system with 10 parallel MPI processes). This helps address the computational requirements of the simulations and will also facilitate the control of multiple DoFs in future studies.

        Speaker: Benedetta Gambosi (Politecnico di Milano)
    • 15:45 16:00
      Wrap-up 15m
      Speaker: Hans E. Plesser ( Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway; Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany)