NEST Conference 2024

Europe/Berlin
Virtual

Description

The NEST Initiative is excited to invite everyone interested in Neural Simulation Technology and the NEST Simulator to the virtual NEST Conference 2024. The NEST Conference provides an opportunity for the NEST Community to meet, exchange success stories, swap advice, and learn about current developments in and around NEST spiking network simulation and its applications. We particularly encourage young scientists to participate in the conference!

The Virtual NEST Conference 2024

The NEST Conference 2024 will again be held as a virtual event on

Monday/Tuesday, 17/18 June 2024.

 

Program

The detailed program is available here: https://events.hifis.net/event/1168/timetable/#20240618.detailed 

Confirmed Keynote Speakers

Tadashi Yamazaki, University of Electro-Communications, Tokyo

Behnam Ghazinouri, Ruhr University Bochum

Benedetta Gambosi, Politecnico di Milano

Tobias Gemmeke, RWTH Aachen University

 

Thank you for attending the NEST Conference 2024! Looking forward to seeing you next year!

    • 08:45 09:00
      Registration Zoom

    • 09:00 09:15
      Welcome & introduction Zoom

      Convener: Hans Ekkehard Plesser (Norwegian University of Life Sciences)
    • 09:15 10:00
      Keynote Zoom

      • 09:15
        Diversity and inclusion: Distributed simulation of multiple brain and body models in multiple simulators on multiple computers across multiple organizations 45m

        NEST is one of the most widely used simulation environments. It is the first-class simulator in our project, endorsed by the Program for Promoting Researches on the Supercomputer Fugaku [1], in which we develop a spiking network model of the mouse brain in NEST on Fugaku and connect it to a musculoskeletal body model running on a local PC to realize a closed-loop brain-body simulation over the Internet while passing firewalls. Moreover, we include another brain model, written in C++ with CUDA on a local GPU machine, in the closed loop. To realize this, we needed a way to connect multiple models implemented in multiple simulators running on multiple computers across multiple organizations. We solved this by using the Robot Operating System (ROS) [2], a de-facto standard communication framework in the field of robotics, and rosbridge, which encapsulates ROS messages in JSON and transfers them via websocket to pass firewalls. Specifically, we developed a C++ library for rosbridge that allows simulators written in C/C++ to communicate over rosbridge. Using these technologies, we were able to realize a closed-loop simulation among a cortico-basal ganglia-thalamus model in NEST on Fugaku, a mouse body model in Gazebo on a local PC, and a cerebellar model in C++ with CUDA on another local GPU machine [3]. These results suggest that ROS and rosbridge can provide more flexibility to and enhance the versatility of NEST.

        Speaker: Tadashi Yamazaki (The University of Electro-Communications)
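The rosbridge mechanism described in the abstract can be sketched in a few lines: a ROS message is wrapped in a JSON "publish" operation and shipped over a websocket. The `"op": "publish"` envelope is part of the rosbridge protocol; the topic name and message fields below are purely illustrative, not the project's actual interface.

```python
import json

def make_publish_op(topic, msg):
    """Wrap a ROS message (a plain dict) in a rosbridge 'publish'
    operation. rosbridge ships this JSON over a websocket, which is
    what lets the traffic pass institutional firewalls."""
    return json.dumps({"op": "publish", "topic": topic, "msg": msg})

# Hypothetical topic carrying spikes from the brain model on Fugaku
# to the musculoskeletal body model on a local PC.
packet = make_publish_op("/brain/spikes", {"senders": [3, 17], "t_ms": 120.5})
decoded = json.loads(packet)  # what the receiving side would parse
```

In the actual setup, such packets would be sent through a websocket connection rather than decoded locally.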
    • 10:00 10:20
      Talks Zoom

      • 10:00
        A bottom-up, mesoscale approach for the study of the claustrum function 20m

        The claustrum, a structure with extensive connectivity to the rest of the brain that is involved in many higher cognitive processes, is still one of the least understood parts of the mammalian nervous system. One reason is its complex location and geometry: a folded, thin layer of neurons sandwiched between other cellular groups and white matter tracts, which creates specific challenges for experimentation. In recent years, however, the claustrum has been studied intensely in mice, revealing many details about its cellular composition and dynamics, but still without a satisfactory mechanistic explanation of its function.

        This work investigates through computational simulations the dynamics of the interaction between the claustrum and the cortex. To this end, we built a bottom-up, mesoscale in-silico model of the mouse claustrum that we reciprocally connected with a simplified model of the cortex. Specifically, we used NEST and NESTML to create AEIF neurons (Brette and Gerstner, 2005) for the claustrum and Wang-Buzsaki cortical neurons with difference-of-exponentials time-course synaptic conductances (Wang and Buzsaki, 1996; Palmigiano et al, 2017). From this work in progress, we will present how we arrived, using NEST, at the sets of parameters that replicate the responses of claustrum neurons in vitro, their arrangement in space and their measured connectivity (Kim et al, 2016; White and Mathur, 2018; Graf et al, 2020). The replication of the Palmigiano et al. (2017) network allowed the production of a complex, cortical-like signal. Furthermore, we will present preliminary results on the interaction between the claustrum and the cortex.

        Speaker: Dr Razvan Gamanut (Okinawa Institute of Science and Technology Graduate University)
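The AEIF (adaptive exponential integrate-and-fire) model of Brette and Gerstner (2005) used above is a two-variable ODE system. A minimal forward-Euler sketch with the standard published parameter values (not the fitted claustrum parameters from this work) looks like:

```python
import math

def simulate_adex(i_ext_pa, t_ms=300.0, dt=0.1):
    """Forward-Euler integration of the AdEx/AEIF neuron (Brette &
    Gerstner, 2005) with standard parameters; units pF, nS, mV, ms, pA."""
    C, g_L, E_L = 281.0, 30.0, -70.6
    V_T, Delta_T = -50.4, 2.0
    tau_w, a, b = 144.0, 4.0, 80.5
    V_reset, V_peak = -70.6, 0.0

    V, w, spikes = E_L, 0.0, []
    for step in range(int(t_ms / dt)):
        # Exponential spike-initiation term, clamped to avoid overflow
        exp_term = g_L * Delta_T * math.exp(min((V - V_T) / Delta_T, 20.0))
        dV = (-g_L * (V - E_L) + exp_term - w + i_ext_pa) / C
        dw = (a * (V - E_L) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= V_peak:          # spike: reset voltage, increment adaptation
            V = V_reset
            w += b
            spikes.append(step * dt)
    return spikes

spikes = simulate_adex(800.0)  # suprathreshold step current
```

In NEST itself these dynamics are provided by built-in AEIF models (or generated from a NESTML description); the sketch only illustrates the equations being fitted.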
    • 10:20 10:35
      Group photo & short break Zoom

    • 10:35 11:15
      Talks Zoom

      • 10:35
        70 Years of Spiking Network Simulations: Past, Present, Perspectives 20m

        The 70th anniversary of the first simulation of a spiking neuronal network by Farley and Clark (Proceedings of the 1954 Symposium on Information Theory, Institute of Radio Engineers) provides a good opportunity to take stock of the development of spiking network simulations over the past seven decades, chart present practices, and develop perspectives for future scientific practice in the field. Current practice shows an interesting division: modeling of networks of morphologically detailed cells appears to be almost entirely based on the NEURON simulator, with ModelDB as the central, neuroscientifically curated and actively used model repository. In contrast, for networks of point neurons, in spite of some integrative efforts (e.g., Brette et al., 2007), a wide range of simulation tools exists, and many scientific publications are still based on hand-crafted code in generic programming languages, with models often at best shared via author-maintained GitHub repositories. Computational neuroscience of spiking network models is furthermore dominated by studies based on small networks of a few thousand neurons, even though data-driven models of millions of neurons are publicly available and simulation technology has been capable of simulating networks of billions of neurons for a decade. I would like to stimulate a discussion on these observations, their causes, their consequences and, if deemed necessary and desirable, steps towards a better computational neuroscience practice.

        Speaker: Hans Ekkehard Plesser (Norwegian University of Life Sciences; Forschungszentrum Jülich; Käte Hamburger Kolleg RWTH Aachen)
      • 10:55
        An approach to handle provenance-tracked analysis of NEST simulations using Alpaca 20m

        NEST simulations are typically executed by a script that configures and runs the simulation. Despite recent improvements in NEST 3.x, where file headers specify the detailed origin of the outputs, users still must interpret the data with respect to the simulation setup. This information is difficult to convey, especially in collaborative contexts with shared simulation results. Moreover, during the explorative process of scientific discovery, results may change without warning when details of the simulation are changed, which could lead to wrong interpretations by collaborators who are unaware of such changes. Therefore, we face two challenges: results are stored in data objects without metadata that describe their role in the simulation, and the simulation outputs are not linked to a description of their provenance with respect to the simulation building.

        Here we present concepts to tackle both challenges when using the NEST Python interface. We consider a typical simulation experiment and subsequent data analysis using the Elephant (doi:10.5281/zenodo.1186602; RRID:SCR_003833) toolbox [1]. First, we show how data from a NEST simulation can be represented with data objects annotated with simulation details using the Neo library [2]. Second, we demonstrate how the software Alpaca (doi:10.5281/zenodo.10276510; RRID:SCR_023739) can capture workflow provenance when running a simulation (see Figure) [3]. The two approaches allow the semantic description of the simulation experiment that contributes to the FAIR principles [4] by improving the findability of results through detailed provenance, supporting interoperability through a standardized data model, and promoting reuse of simulation data through enhanced data description.

        Speaker: Cristiano Köhler (Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany and RWTH Aachen University, Aachen, Germany)
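The provenance-capture idea behind Alpaca can be illustrated with a tiny decorator that records which function produced which output from which inputs as the analysis runs. This is a conceptual sketch only, not Alpaca's actual API; function and field names are invented for illustration.

```python
import functools

PROVENANCE = []  # ordered trail of executed analysis steps

def track(fn):
    """Record each call's function name, input object ids and output
    object id, mimicking the principle of workflow provenance capture
    (the real Alpaca tool records far richer metadata)."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        out = fn(*args, **kwargs)
        PROVENANCE.append({
            "function": fn.__name__,
            "inputs": [id(a) for a in args],
            "output": id(out),
        })
        return out
    return wrapper

@track
def mean_rate(spike_times, duration_ms):
    """Toy analysis step: mean firing rate in spikes/s."""
    return len(spike_times) / duration_ms * 1000.0

r = mean_rate([10.0, 20.0, 35.0], 100.0)
```

Chaining several decorated steps yields a directed graph linking simulation outputs to final results, which is the provenance structure the talk describes.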
    • 11:15 11:30
      Poster teasers Zoom

      • 11:15
        Exploiting network structure in NEST: Efficient communication in brain-scale simulations 3m

        The communication of spike events constitutes a major bottleneck in simulations of brain-scale networks with realistic connectivity. Models such as the multi-area model [1] not only have dense connectivity within areas but also between areas. Synaptic transmission delays within an area can be as short as 0.1 ms, and therefore simulations require frequent spike communication between compute nodes to maintain causality in the network dynamics [2]. This poses a challenge to the conventional round-robin scheme used to distribute neurons uniformly across compute nodes, disregarding the network’s specific topology.
        We target this challenge and propose a structure-aware neuron distribution scheme along with a novel spike-communication framework that exploits this approach in order to make communication in large-scale distributed simulations more efficient. In the structure-aware neuron distribution scheme, neurons are placed on the hardware in a way that mimics the network’s topology. Paired with a communication framework that distinguishes local short-delay intra-area communication and global long-delay inter-area communication, the structure-aware approach minimizes the costly global communication and thereby reduces simulation time. Our prototype implementation is fully tested and was developed within the neural simulation tool NEST [3].
        For the benchmarking of our approach, we developed a multi-area model that resembles the macaque multi-area model in terms of connectivity and workload, while being more easily scalable as it retains constant activity levels. We show that the new strategy significantly reduces communication time in weak-scaling experiments and that the effect increases with an increasing number of compute nodes.

        Speaker: Melissa Lober (Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany; RWTH Aachen University, Aachen, Germany)
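The contrast between the conventional round-robin placement and a structure-aware placement can be sketched in a few lines. The block-of-ranks-per-area split below is a simplifying assumption for illustration, not the scheme's actual placement algorithm.

```python
def round_robin(neuron_areas, n_ranks):
    """Conventional placement: neuron i goes to rank i % n_ranks,
    ignoring which area each neuron belongs to."""
    return [i % n_ranks for i in range(len(neuron_areas))]

def structure_aware(neuron_areas, n_ranks):
    """Sketch of the structure-aware idea: give each area a contiguous
    block of ranks so that dense intra-area spike traffic stays local
    to that block, leaving only sparse inter-area traffic global."""
    areas = sorted(set(neuron_areas))
    ranks_per_area = max(1, n_ranks // len(areas))
    counters = {a: 0 for a in areas}
    placement = []
    for a in neuron_areas:
        base = areas.index(a) * ranks_per_area
        placement.append(base + counters[a] % ranks_per_area)
        counters[a] += 1
    return placement

# Two areas of four neurons each, four MPI ranks
rr = round_robin(["V1"] * 4 + ["V2"] * 4, 4)
sa = structure_aware(["V1"] * 4 + ["V2"] * 4, 4)
```

Under round-robin, every rank hosts neurons of both areas, so every intra-area spike may cross ranks; under the structure-aware placement, V1 occupies ranks 0-1 and V2 ranks 2-3, confining the dense traffic.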
      • 11:18
        Continuous benchmarking of brain-research simulation code: Keeping pace with an evolving ecosystem of models and technologies 3m

        In computational neuroscience, systematic performance monitoring of simulation code is challenging due to continuous technological advancements and an ever-evolving zoo of neural system models. Albers et al. [1] described generic principles for efficient benchmarking workflows and developed the open-source framework BeNNch, streamlining the process for neural simulators such as NEST [2]. Here, we extend this work by integrating the framework into a continuous benchmarking workflow. We present solutions to automatically construct all necessary scripts, execute all runs and centrally aggregate all results. New user-defined configurations are derived from prior studies, facilitating reproducibility and reducing error-proneness. Comparability across platforms and code versions is achieved through a unified methodology for recording benchmark data and metadata. Our approach enables continuous benchmarking of simulation tools and thus early detection of performance degradation. Using NEST as an example, we demonstrate the potential of our continuous benchmarking workflow for advancing simulation technology for brain research.

        Speaker: Jan Vogelsang (Forschungszentrum Jülich - PGI-15)
      • 11:21
        NEST Replication of a Brain-Constrained Model of Semantic Grounding with Spiking Neurons and Brain-like Connectivity 3m

        This work presents a first step towards scaling up a previously published brain-constrained model of semantic grounding. The original model explored the neural mechanics of category-specific cell assembly formation through the learning of action and object words (Tomasello et al., 2018). This initial phase focuses on replicating the original findings in NEST, to lay the groundwork for future expansion, including scaling the network.
        The ported model replicates the original 12-area structure: auditory and visual processing as well as articulatory motor and hand motor systems, with three regions per system, for a total of 12 areas, each containing 625 excitatory and 625 inhibitory neurons. The model implements several brain constraints: (i) within-area connections are sparse and local, (ii) between-area connections are implemented in accordance with neuroanatomical studies, (iii) synaptic weights are modified by Hebbian learning rules of long-term potentiation and long-term depression, (iv) excitatory neurons are spiking and noisy and (v) neural activity is regulated through local and global mechanisms. The model was trained on correlated patterns in primary sensorimotor ‘cortices’, with patterns encoding either action or object words. Distributed but discrete cell assembly circuits emerged with category-specific topographies. Differences were found in the motor and visual cortices, and, notably, in highly connected ‘semantic hub’ areas that integrate information from various modalities. Our results indicate that both semantic hubs and category-specific areas emerge from the interplay of neuroanatomical connectivity and correlated neuronal activity during language learning, offering a neuromechanistic explanation for various findings on semantic grounding.

        Speaker: Maxime Carrière (Freie Universität Berlin)
      • 11:24
        Event-Based Eligibility Propagation with Additional Biologically Inspired Features 3m

        Recent advances in neural plasticity research have broadened the foundational Hebbian concept by integrating additional modulating factors. Among these, eligibility propagation (e-prop) stands out as a novel approach, initially devised as an online approximation to backpropagation through time (BPTT) [1]. In this study, we present a series of novel strategies that introduce additional bio-inspired features to e-prop. Our modifications not only contribute to the realism with which e-prop mimics biological processes but also facilitate its implementation in large-scale spiking neural network simulations, thereby establishing its significance for computational neuroscience.

        Our study demonstrates that the learning performance achieved with the modified e-prop method is on par with the original e-prop approach. We highlight the seamless integration of e-prop into NEST's event-driven framework for synapses, contrasting it with the original time-driven implementation. This adaptation expands e-prop's applicability for studying learning processes across biological and artificial neural networks, suggesting a broader utility in the field.

        We delineate our methodological adaptations and their scalability for large-scale network simulations. Through strong- and weak-scaling analyses, we demonstrate how e-prop in NEST scales effectively for larger networks.

        Speaker: Jesus Andres Espinoza Valverde (School of Mathematics and Natural Sciences, Bergische Universität Wuppertal, Wuppertal, Germany)
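At the heart of e-prop is an eligibility trace: a low-pass filter of presynaptic activity multiplied by a postsynaptic pseudo-derivative, later weighted by a learning signal. A minimal per-synapse sketch of that trace (the learning-signal weighting and the event-driven bookkeeping of the talk are omitted) is:

```python
import math

def eprop_traces(pre_spikes, pseudo_derivs, dt=1.0, tau_m=20.0):
    """Minimal e-prop eligibility trace for one LIF synapse: a low-pass
    filter of presynaptic spikes (z_bar) multiplied at each step by the
    postsynaptic pseudo-derivative psi. The weight update would then
    combine these traces with a learning signal (omitted here)."""
    alpha = math.exp(-dt / tau_m)        # membrane filter decay per step
    z_bar, traces = 0.0, []
    for z_pre, psi in zip(pre_spikes, pseudo_derivs):
        z_bar = alpha * z_bar + z_pre    # filtered presynaptic activity
        traces.append(psi * z_bar)       # eligibility at this time step
    return traces

# Two presynaptic spikes, constant pseudo-derivative for illustration
traces = eprop_traces([1, 0, 0, 1], [0.3, 0.3, 0.3, 0.3])
```

An event-based implementation, as in the talk, would update `z_bar` only at spike events by applying the accumulated decay analytically rather than stepping every `dt`.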
    • 11:30 11:40
      Contingency break 10m
    • 11:40 12:40
      Posters Gathertown

      Conveners: Melissa Lober (Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany; RWTH Aachen University, Aachen, Germany), Jan Vogelsang (Forschungszentrum Jülich - PGI-15), Maxime Carrière (Freie Universität Berlin), Jesus Andres Espinoza Valverde (School of Mathematics and Natural Sciences, Bergische Universität Wuppertal, Wuppertal, Germany)
    • 12:40 13:40
      Lunch break 1h
    • 13:40 14:40
      Talks Zoom

      • 13:40
        From Corticogenesis to Functional Networks 20m

        The development and function of the cerebral cortex of the mammalian brain is a complex orchestration of cellular dynamics leading to a highly specialised structure. The present study explores the development of a gene regulatory network that abstracts the underlying DNA and genetic expression responsible for this anatomical process. An agent-based model is created in the high-performance simulation software BioDynaMo to model the 3D spatial formation of the neocortex. A laminated structure of neuronal cell bodies is produced through stochastic cell fate determination, and cell numbers are verified. Multicompartmental neurons are grown using local guidance cues to generate realistic circuit morphologies. Entire cortical columns are simulated with the potential for multicolumn connectivity analysis.

        When stimulated in NEST, these spatially informed circuits are found to produce homeostatic network dynamics through realistic afferent connectivity and input regimes. Synaptic weights are updated through a BCM-based approach to produce networks with realistic cortical activity. This modelling approach allows investigations into the effects of each stage of development and the emergence of functional circuitry in the cortex. Initial analysis is carried out on network motifs and the encoding of synthetic stimuli, showcasing emergent computational units. The networks grown mimic canonical microcircuit connectivity. A full study is underway to analyse the emergent functional circuits that can be grown in this realistic corticogenesis simulation. The model is set up to validate a set of hypotheses regarding emergent circuitry, electrophysiology and the effect of activity during development in the cortex.

        Speaker: Umar Abubacar (University of Surrey)
      • 14:00
        Introducing NEST Desktop v4.0 20m

        NEST Desktop is a web-based GUI application for NEST Simulator [1, 2]. It has become established as a useful tool for guiding students and newcomers into computational neuroscience by letting them explore the behavior of neuron models and network dynamics.

        The latest release (v3.3) provides more models, e.g. multi-compartmental models and synaptic models for plasticity (STDP, Tsodyks). Virtual experiments can be performed on a local machine, on JSC as a Jupyter proxy extension, or on public infrastructure on EBRAINS [3]. Furthermore, the app cooperates with various projects such as Insite (activity during live simulations) [4], the Neurorobotics Platform (NRP) [5] and ViSimpl (a visualization application) [6].

        I will talk about the current development of NEST Desktop (v4.0), especially the plan to integrate the human multi-area cortex model (HuMAM) [7] into NEST Desktop, where users can simulate the large-scale network dynamics of various human brain areas with NEST and analyze them with Elephant [8]. For this purpose, a hierarchical network structure is embodied in NEST Desktop.

        NEST Desktop has been rewritten in a new framework with a plugin-based architecture. With this concept, other spiking-network simulation tools, e.g. Norse [9] or PyNN [10], can be used as front-end plugins with corresponding back-ends.

        Speaker: Dr Sebastian Spreizer (University of Trier; Forschungszentrum Jülich)
      • 14:20
        A simplified model of NMDA-receptor-mediated dynamics in leaky integrate-and-fire neurons 20m

        A model for NMDA-receptor-mediated synaptic currents generating persistent activity proposed by Wang and Brunel [1-3] has been widely adopted in computational neuroscience, both for spiking-neuron and mean-field models [1-4]. The model describes synaptic dynamics by a phenomenological two-dimensional nonlinear ODE system for the gating variable S(t). Due to the nonlinearity, the pre-synaptic gating variables of a post-synaptic neuron cannot be simulated in aggregated form. Numerically efficient solutions are only feasible for fully connected networks with identical, short delays (see e.g. [5]).

        We derive a linear approximation to Wang’s model which allows us to integrate all NMDA input currents to a neuron in aggregate form as for linear synapses. Using a reference implementation in NEST, we show that the approximation is accurate and that a network model based on the approximation shows the same decision making dynamics as one using Wang’s original model. For an example network with around 8000 neurons, the approximation is about 30 times faster, and scales sublinearly with the number of synapses.

        Exploiting the flexibility and performance gained through the approximation, we investigate the dynamics of a binary decision-making network with sparse connectivity and randomized delays.

        Speaker: Jan Eirik Skaar (Norwegian University of Life Sciences)
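The core of the approximation can be illustrated with the NMDA gating variable s(t): in the Wang-type model a presynaptic spike increments s by alpha*(1 - s), a saturating (nonlinear) update that prevents aggregating inputs, while a linearized variant uses a fixed increment and so can be summed across synapses. The sketch below compares the two on the same spike train; parameter values are illustrative, not the paper's.

```python
import math

TAU_NMDA = 100.0  # ms, slow NMDA decay (illustrative value)

def step(s, dt, spike, linear, alpha=0.2):
    """One update of the NMDA gating variable s.
    Nonlinear (Wang-type): a spike adds alpha * (1 - s), saturating at 1.
    Linear sketch: a spike adds a fixed alpha, which is what permits
    aggregating all of a neuron's NMDA inputs in one state variable."""
    s *= math.exp(-dt / TAU_NMDA)
    if spike:
        s += alpha if linear else alpha * (1.0 - s)
    return s

# Drive both variants with the same periodic spike train
s_nl = s_lin = 0.0
for t in range(200):
    spike = (t % 10 == 0)
    s_nl = step(s_nl, 1.0, spike, linear=False)
    s_lin = step(s_lin, 1.0, spike, linear=True)
```

The divergence at high input rates (the linear variant exceeds the saturation bound of 1) is exactly the regime where a careful linear approximation, as derived in the talk, must be calibrated.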
    • 14:40 14:50
      Short break 10m
    • 14:50 15:10
      Talks Zoom

      • 14:50
        CA1 Human Hippocampus Model 20m

        We developed a biologically grounded model of the human hippocampus CA1 region. The model includes all pyramidal cells and interneurons at realistic numbers (about 5 million cells) and their connections (31 billion). The connectivity matrix was generated using previously published methods [1] and stored in the SONATA data format. The model is set up to run in the NEST simulator using a previously published adaptive leaky integrate-and-fire neuron model [2] able to faithfully reproduce the spike trains observed in vitro and in vivo. Here, we present the NEST implementation choices we made to run our model on the supercomputer Galileo100 at the CINECA facility in Italy.

        Speaker: Sergio Mauro Gavino Solinas (University of Sassari)
    • 15:10 15:55
      Keynote Zoom

      • 15:10
        A multiarea model predicts the changes in thalamocortical beta oscillations caused by dopamine depletion in basal ganglia and cerebellum 45m

        Research on parkinsonism has underscored the central roles of basal ganglia (BG) alterations and dopamine level reductions in symptom emergence [7]. Recent studies, however, hint at cerebellar involvement in altered parkinsonian brain activity [8, 2]. To unravel the role of this region in parkinsonism, we developed an innovative multiscale, multiarea brain model, aiming to investigate neural dynamics in both healthy and parkinsonian states. This model integrates microcircuits of the BG [4] and cerebellum [3], employing spiking neural networks (SNN), which also simulate dopamine-depletion mechanisms, and includes a three-equation mass model of the cortex, thalamus and reticular nucleus, reproducing the loops these areas engage in [9]. After validation against the stand-alone SNN circuits [3, 4], we tested the model first in a generic motor state. We found that the resemblance between our simulations and experimental data [5, 6], as indicated by matching population firing rates and enhanced beta band oscillations, was notably more pronounced when dopamine-depletion effects occurred in both the cerebellum and BG (compared to BG alone), emphasizing a more direct involvement of the cerebellum in parkinsonism. Lastly, we simulated a behavioral protocol, eyeblink classical conditioning, incorporating plastic mechanisms at the cerebellar level [1] under both physiological and pathological conditions. Results indicate that despite altered cellular structure, the cerebellum exhibits adaptive capabilities, albeit with reduced effectiveness compared to the physiological state. In summary, our findings stress the significance of recognizing the cerebellum’s role in parkinsonism to fully grasp the intricate neural mechanisms underlying the related disorders.

        Speaker: Benedetta Gambosi (Politecnico di Milano)
    • 15:55 17:00
      Mingle Gathertown

    • 17:30 18:30
      Workshop: NEST Initiative General Assembly Different Zoom room that will be shared with NI members

      Convener: Hans Ekkehard Plesser (Norwegian University of Life Sciences)
    • 09:00 09:45
      Keynote Zoom

      • 09:00
        Accelerated Simulation of Biological Neural Networks 45m

        After many years of research in neuroscience, the manifestation of intelligence in spiking neurons remains a puzzle. Computational neuroscience has the potential to uncover the underlying principles but is burdened by the computational complexity of executing biological neural network simulations of sufficient size and realism.

        Various efforts have addressed the challenge of acceleration by applying software as well as hardware design optimizations. In doing so, a classic gap opens between the flexibility and usability of a platform and the performance it provides.

        In 2018, we had the opportunity to take a fresh look at the challenge at hand, assessing existing system solutions as well as accumulated knowledge in the domain of neuroscience. It quickly became clear that, to be successful, any platform has to combine three aspects: usability for neuroscientists, flexibility to accommodate new models and insights, and sufficient computing performance to handle reasonably large networks and to provide insight into slow plasticity processes.

        In this talk, I will review key requirements placed on such a system from a neuroscience as well as an engineering perspective. The focus is on three key bottlenecks: communication, data access, and numerical updates of the model equations. This is followed by a discussion of the potential of existing architectural concepts to address these bottlenecks. Finally, the real-world neuroAIx FPGA cluster with its 20x speed-up is juxtaposed with its blueprint promising a 100x acceleration.

        Speaker: Tobias Gemmeke (IDS, RWTH Aachen University)
    • 09:45 10:25
      Talks Zoom

      • 09:45
        Modeling and simulating spiking neurons with NESTML on NEST GPU 20m

        NESTML is a domain-specific modeling language for neuron models and synaptic plasticity rules [1]. It is designed to support researchers in computational neuroscience by allowing them to specify models in a precise and intuitive way. These models can subsequently be used in dynamical simulations of small or large-scale spiking neural networks, using high-performance simulation code generated by the NESTML toolchain. The code extends a simulation platform (such as NEST [2]) with new and easy-to-specify neuron and synapse models, formulated in NESTML.

        NESTML was originally developed for NEST simulations to be executed on CPUs; here we extend it with support for GPU-based simulation for the NEST-GPU target platform [3]. We demonstrate our approach with code generation for a balanced random network of integrate-and-fire neurons with postsynaptic currents in the form of decaying exponential kernels. The dynamics of the network are solved using exact integration [4]. We validate the results through a comparison of statistical properties of the network dynamics against those obtained from NEST running on CPUs as a reference.

        By virtue of using NESTML, neuroscientists have to write the model code only once and have the same models run on multiple target platforms without having to write any additional code. This alleviates the need for neuroscientists to write CUDA/C++ code for GPUs, with its attendant pitfalls. This improves model reuse and scientific reproducibility.

        Speaker: Ms Pooja Babu (Simulation & Data Lab Neuroscience, Institute for Advanced Simulation, Jülich Supercomputing Centre (JSC), Jülich Research Centre, Jülich, Germany; Software Engineering, Software Engineering, RWTH Aachen University, Germany)
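The "exact integration" [4] mentioned above exploits the linearity of the subthreshold LIF dynamics with exponential postsynaptic currents: the state can be advanced by one precomputed propagator per time step instead of a numerical ODE solve. A minimal sketch (illustrative parameter values; not generated NESTML code):

```python
import math

def make_propagator(tau_m=10.0, tau_s=2.0, c_m=250.0, h=0.1):
    """Exact-integration propagator for a LIF neuron with exponentially
    decaying postsynaptic currents: the linear subthreshold dynamics
    are advanced exactly over step h by constant coefficients.
    Units: ms, pF, pA, mV."""
    p11 = math.exp(-h / tau_s)   # synaptic current decay over one step
    p22 = math.exp(-h / tau_m)   # membrane voltage decay over one step
    # Coupling of current into voltage, from the analytic solution
    p21 = (p11 - p22) / (c_m * (1.0 / tau_m - 1.0 / tau_s))
    def step(v, i_syn):
        return p22 * v + p21 * i_syn, p11 * i_syn
    return step

step = make_propagator()
v, i_syn = 0.0, 100.0      # 100 pA synaptic kick, then no further input
for _ in range(1000):      # 100 ms of free evolution
    v, i_syn = step(v, i_syn)
```

Because the coefficients are exact for the linear system, the scheme is both fast and machine-precision accurate between spikes, on CPU and GPU alike.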
      • 10:05
        Enabling brain-scale spike-timing dependent plasticity by incorporating axonal and dendritic transmission delays in massively parallel spiking neural network simulations 20m

        NEST, a distributed neural network simulator, is capable of simulating large, sparsely interconnected networks, wherein axons and dendrites are represented as simple transmission delays. The synapses in these networks can incorporate plasticity mechanisms, including the widely used spike-timing dependent plasticity (STDP). Presently, NEST employs purely dendritic delays, which are suitable for networks up to approximately one cubic millimeter. To accommodate larger networks and enhance the accuracy of STDP weight dynamics, the specification of both dendritic and axonal delays is crucial. However, the introduction of axonal delays presents a causality dilemma, as pre-synaptic spikes may be processed internally prior to reaching the synapse, necessitating knowledge of future post-synaptic spikes. Several strategies to circumvent this issue are explored, with one requiring minimal alterations to the existing code and others involving a comprehensive overhaul of low-level simulator code for a cleaner solution. The most promising strategies are assessed using performance and memory benchmarks.

        Speaker: Jan Vogelsang (Forschungszentrum Jülich - PGI-15)
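The causality issue described above can be made concrete by looking at the spike-time difference seen at the synapse once the total transmission delay is split. A small illustrative sketch (not NEST code):

```python
def stdp_delta_t(t_pre, t_post, d_axonal, d_dendritic):
    """Spike-time difference at the synapse when the transmission delay
    is split: the presynaptic spike arrives after the axonal delay, the
    backpropagating postsynaptic spike after the dendritic delay.
    Positive values mean pre-before-post (potentiation) under STDP."""
    return (t_post + d_dendritic) - (t_pre + d_axonal)

# Purely dendritic delay (current NEST assumption): the somatic spike
# order is preserved at the synapse.
dt_dendritic = stdp_delta_t(10.0, 11.0, d_axonal=0.0, d_dendritic=2.0)

# A large axonal delay flips the sign: the synapse sees post-before-pre
# even though the presynaptic soma fired first, so processing the
# presynaptic spike on arrival requires knowing future post spikes.
dt_axonal = stdp_delta_t(10.0, 11.0, d_axonal=2.0, d_dendritic=0.0)
```

This sign flip is why naively delivering presynaptic spikes at their somatic emission time breaks STDP weight dynamics, motivating the buffering strategies benchmarked in the talk.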
    • 10:25 10:35
      Group photo & short break Zoom

    • 10:35 12:05
      Workshop Zoom

      • 10:35
        Introduction to Arbor --- Extending your Toolbox with Multi-Compartment Simulation 1h 30m

        We will present Arbor, a multi-compartment simulation library that complements NEST, TVB, and nanoscale simulations, as well as offering interfaces to these tools. Arbor has been designed to leverage modern hardware, including GPUs, while delivering an intuitive interface to neuroscientists that is isolated from the concrete, low-level details. It has been shown to deliver performance up to the full scale of the JUWELS Booster.

        In this tutorial, we will show how to use Arbor, starting from a simple ring network in NEST that is transformed, step by step, into a morphologically detailed model. Participants will be given ample opportunity to interact with the models.

        Speaker: Han Lu (Forschungszentrum Juelich GmbH)
    • 12:05 13:00
      Lunch break 55m
    • 13:00 13:40
      Talks Zoom

      • 13:00
        Comparing data-driven architecture reconstructions of cortical microcircuits 20m

        Microcircuits are the building blocks of the neocortex [1]. Single instances have been reconstructed experimentally (e.g., [2]), and their general dynamics and information processing capabilities have been investigated theoretically (e.g., [3,4]). Their connectivity is usually represented in connectivity maps consisting of probabilities that neurons establish connections. These maps reduce the complicated circuitry to simple relations between cell types, allowing for efficient instantiations of neural network models in parallel computers [5]. While higher-order features like connectivity motifs are neglected, they enable the discovery of how the underlying structural principles of local circuits are linked to their dynamics.

        Recent years have seen significant advances in the application of electron microscopy (EM) for the reconstruction of local cortical networks through leveraging novel machine learning techniques ([6, 7]). These data allow for a more precise look into the architecture of local cortical circuits than was previously possible.

        Here, we construct a layer-resolved, population-based connectivity map from a $1\:\mathrm{mm}^{3}$ EM reconstruction of mouse visual cortex [6]. We compare the obtained microcircuit connectivity based on EM data with a corresponding representation derived from light microscopy (LM) data [2]. The connectivity maps exhibit qualitative differences, e.g., in termination patterns of inter-laminar projections. Additionally, we find that the length scale of connectivity is consistently overestimated when using morphology-based approaches compared to the actual connectivity available from EM data. Finally, we simulate spiking neural networks constrained by the derived microcircuit architectures with NEST [8], investigating the extent to which simulated spiking activity is consistent with experimentally observed neural firing.

        Speaker: Mr Anno Kurth (Institute for Advanced Simulation (IAS-6), Jülich Research Centre, RWTH Aachen University)
      • 13:20
        Simplified neuronal model capturing brain-state specific apical-amplification, -isolation and -drive induced by calcium dynamics 20m

        There is mounting experimental evidence that brain-state specific neural mechanisms, supported by connectomic architectures, serve to combine past and contextual knowledge with the current, incoming flow of evidence (e.g. from sensory systems). Such mechanisms are distributed across multiple spatial and temporal scales and require dedicated support at the level of individual neurons and synapses. A prominent feature in the neocortex is the structure of large, deep pyramidal neurons, which show a peculiar separation between an apical dendritic compartment and a basal dendritic/peri-somatic compartment, with distinctive patterns of incoming connections and brain-state specific activation mechanisms: apical-amplification, -isolation and -drive, associated with wakefulness, deeper NREM sleep stages and REM sleep, respectively. The cognitive roles of apical mechanisms have been demonstrated in behaving animals. In contrast, classical models of learning spiking networks are based on single-compartment neurons that lack a description of mechanisms combining apical and basal/somatic information. This work leverages the NEST multi-compartment modelling framework, aiming to provide the NEST community with a simplified neuronal model (Ca-AdEx) that captures brain-state specific apical-amplification, -isolation and -drive through the integration of calcium dynamics in a distal compartment. The proposed neuronal model is essential for supporting brain-state specific features in NEST learning networks at minimal computational cost when the two-compartment Ca-AdEx is used. A machine learning algorithm, constrained by a set of fitness functions, selected the parameters defining neurons that express the desired apical mechanisms. Furthermore, we identified a piece-wise linear transfer function (ThetaPlanes) to be used in large-scale bio-inspired artificial intelligence systems.
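The exact definition of ThetaPlanes is given in the work itself; purely as an illustration of the general shape of a thresholded, saturating piece-wise linear transfer function (all parameters here are hypothetical):

```python
def piecewise_linear(x, theta=0.0, slope=1.0, ceiling=1.0):
    """A thresholded, saturating piece-wise linear transfer function:
    zero below the threshold theta, linear above it, clipped at a ceiling."""
    if x <= theta:
        return 0.0
    return min(slope * (x - theta), ceiling)
```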

        Speaker: Elena Pastorelli (INFN - Istituto Nazionale di Fisica Nucleare - sezione di Roma)
    • 13:40 14:25
      Keynote Zoom

      • 13:40
        The attractor dynamics of behavioral flexibility in spatial and reversal learning 45m

        To survive in a changing world, animals often need to suppress an obsolete behavior and acquire a new one. This process is known as reversal learning (RL). The neural mechanisms underlying RL in spatial navigation have received limited attention, and it remains unclear which neural mechanisms maintain behavioral flexibility.
        We extended an existing closed-loop simulator of spatial navigation and learning based on spiking neural networks (Ghazinouri et al. 2023). The activity of place cells and boundary cells was fed as input to 40 action selection neurons, each representing one direction of movement. The activity of these neurons drove the movement of the agent. When the agent reached the goal, the behavior was reinforced with spike-timing-dependent plasticity (STDP) coupled with an eligibility trace, which marks synaptic connections for future reward-based updates. The modeled RL task had an A-B-A design, in which the goal was switched between two locations A and B every 10 trials.
        Agents using symmetric STDP initially excel at finding goal A, but fail to find goal B after the goal switch, persevering on goal A. Injecting short pulses of noise into the action neurons, using asymmetric STDP, and using small place field sizes were each effective in driving spatial exploration in the absence of rewards, which ultimately led to finding goal B and, hence, reversal learning. However, this flexibility comes at the price of lower performance. Our work shows three examples of neural mechanisms that achieve flexibility at the behavioral level, whose differences can be understood in terms of attractor dynamics.
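The reinforcement scheme described above, an eligibility trace that decays between events and is converted into a weight change only when reward arrives, can be written compactly. A minimal sketch with illustrative names, not the study's actual implementation:

```python
import math

def decay_trace(trace, dt, tau):
    """Eligibility trace decays exponentially between plasticity events."""
    return trace * math.exp(-dt / tau)

def apply_reward(weight, trace, reward, lr):
    """On reward delivery, the accumulated eligibility is converted
    into an actual weight change; without reward the trace simply fades."""
    return weight + lr * reward * trace
```

STDP sets the sign and magnitude of the trace at spike pairings; the reward signal then gates which of the marked synapses are actually updated.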

        Speaker: Behnam Ghazinouri (Ruhr-University Bochum)
    • 14:25 14:40
      Poster teasers Zoom

      • 14:25
        Implementation and Validation of a Balanced Excitatory-Inhibitory Network in Loihi 3m

        We implement a balanced excitatory-inhibitory (EI) network on Intel’s neuromorphic hardware Loihi. A version of [1] has been used by researchers as a benchmark and validation case for simulators in various software and hardware environments [2]. The implementation here has the same LIF neurons, but exponential-decay synapses and a reduced size, which accommodates current software and hardware limitations of Loihi [3]. We implement the same network in NEST for validation.
        The Loihi implementation was designed as in [4], by scaling and shifting the LIF models and their parameters to fit within Loihi’s integer state arithmetic and limited-precision parameter storage. For the network implementations here, we observe visually very similar firing patterns in the NEST and neuromorphic implementations. Network rate and correlation comparisons as in [5] yield similar numerical results, although not at the level of comparisons between standard simulations on varied architectures.
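The scaling-and-shifting step can be illustrated by a simple fixed-point quantization of a floating-point parameter into a limited-precision signed integer range. A hedged sketch (the bit width and scale factor are placeholders, not Loihi's actual specification):

```python
def to_fixed(x, scale, bits=12):
    """Map a floating-point parameter onto a signed integer grid:
    multiply by a scale factor, round, and clip to the representable
    range of a limited-precision register."""
    q = round(x * scale)
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return max(lo, min(hi, q))
```

In practice, membrane potentials, thresholds, and weights must all share consistent scale factors so that the integer dynamics reproduce the original floating-point LIF equations.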
        The NEST implementation of the EI network runs approximately in real time, ~20 s/s biological time, scaling weakly with network size [2]. The Loihi simulation performs much faster, as expected for a specialized hardware simulator. It exhibits similar weak scaling, running about 500x faster than a standard CPU implementation and 25-50 times faster than biological real time. Power consumption was also much lower, at an approximately constant 2 W per chip, with up to 6 chips used for these simulations.
        In conclusion, Loihi shows promise as an accelerator for biological neural network simulations. However, more studies are needed to fully qualify the benefits and trade-offs of this platform.

        Speaker: Dr Alexander Dimitrov (Forschungszentrum Juelich)
      • 14:28
        Elucidating the spatial complexity of brain circuits with the NEST::multiscale toolchain 3m

        Synaptic transmission plays a crucial role in neuron-to-neuron communication, while internal voltage dynamics, driven by dendritic currents and the spatial arrangement of synapses, influence the responses of neurons within networks. The extent to which these dynamics affect computations at different spatial scales is not fully understood. Current simulation tools focus either on detailed neuron models (e.g. NEURON, Arbor) or on abstract large-scale network dynamics (e.g. NEST), leading to a divide where models such as those from the Blue Brain Project (BBP) include dendritic details in NEURON, but lack these specifics in abstract models in NEST.

        Addressing this gap requires a comprehensive approach, including detailed neuronal model databases, a systematic method for simplifying these models, and a simulation tool that supports both detailed and abstract models in network simulations. By integrating the BBP models into the NEural Analysis Toolkit (NEAT) [1], we enable simplification to varying degrees of coarse-grained description. In addition, a compartmental modelling framework has been introduced within NEST using NESTML, which allows both full and reduced models to be simulated seamlessly; together, these components are referred to as the NEST::multiscale toolchain.

        This toolchain has been used to develop a network model of layer 5 of the visual cortex, using recent connectomics data [2], to investigate the spatial resolution required for accurate modelling of brain circuits. The results reveal the appropriate level of spatial complexity required in neural models to reproduce the computational functions of cortical neurons, and provide insights into the design of minimal yet functionally representative neuron models for efficient and effective simulation.

        Speaker: Joshua Boettcher (Forschungszentrum Jülich)
      • 14:31
        Developing NEST GPU: from code optimization to validation 3m

        Simulating large regions of the mammalian brain at single-neuron spiking activity resolution poses significant challenges from both simulation software and hardware execution platform perspectives. In multi-GPU systems, a relevant aspect concerns the implementation of the software structures necessary for the organization of remote connections (i.e., between neurons allocated in different GPUs) and for the communication of spikes between the different GPUs. NEST GPU [1,2], the GPU component of the neural network simulator NEST [3], is tackling this challenge to make best use of present and upcoming supercomputers equipped with large numbers of powerful GPUs. Here, we extend our recent work of dynamically constructing networks directly in GPU memory [4] from one GPU to multiple GPUs in parallel, and we show performance results of these optimizations. To continuously test for correctness, we are setting up a validation pipeline to automatically compare the spiking activity of neuroscientifically relevant models such as the cortical microcircuit model [5] and the multi-area model of macaque vision-related cortex [6] with the respective CPU version as a reference. Furthermore, we give an update on our ongoing efforts in aligning the GPU and CPU components of NEST.

        Speaker: Luca Sergi (University of Cagliari)
      • 14:34
        Good practices for handling metadata in simulation workflows 3m

        Computer simulations are an essential pillar of knowledge generation in science.
        Understanding, reproducing, and exploring the results of simulations relies on tracking and organizing metadata describing numerical experiments.
        However, the models used to understand real-world systems, and the computational machinery required to simulate them, are typically complex, and produce large amounts of heterogeneous metadata.
        Here, we present general practices for acquiring and handling metadata that are agnostic to software and hardware, and highly flexible for the user.
        These consist of two steps: 1) recording and storing raw metadata, and 2) selecting and structuring metadata.
        As a proof of concept, we developed a Python tool to help with the second step, and use it to apply our practices to distinct high-performance computing use cases from hydrology and neuroscience.
        Our practices and the tool support sustainable numerical workflows, facilitating reproducibility and data reuse in generic simulation-based research.
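The two-step practice, recording raw metadata first and selecting and structuring it later, can be sketched in a few lines of Python (function names and selected fields are illustrative, not the presented tool's API):

```python
import json
import platform
import sys

def record_raw(params):
    """Step 1: capture run metadata broadly and cheaply at execution time."""
    return {
        "argv": list(sys.argv),    # how the run was invoked
        "python": sys.version,     # interpreter version
        "host": platform.node(),   # machine the run executed on
        "params": params,          # simulation parameters
    }

def structure(raw, keys):
    """Step 2: select and organise only the fields needed for analysis."""
    return {k: raw[k] for k in keys if k in raw}

# Raw metadata is typically persisted immediately, e.g. as JSON:
snapshot = json.dumps(record_raw({"seed": 123}))
```

Deferring the selection step keeps recording cheap and lossless, while the structured view can be regenerated at any time as analysis needs change.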

        Speaker: Jose Villamar (Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany; RWTH Aachen University, Aachen, Germany)
    • 14:40 14:55
      Contingency break 15m
    • 14:55 15:55
      Posters Gathertown

      Conveners: Dr Alexander Dimitrov (Forschungszentrum Juelich), Joshua Boettcher (Forschungszentrum Jülich), Luca Sergi (University of Cagliari), Jose Villamar (INM-6, Forschungszentrum Jülich)
    • 15:55 16:10
      Wrap-up Gathertown

      Convener: Abigail Morrison (INM-6 Forschungszentrum Jülich)
    • 16:10 17:00
      Mingle Gathertown
