Speaker
Description
In computational neuroscience, systematic performance monitoring of simulation code is challenging due to continuous technological advancements and an ever-evolving zoo of neural systems models. Albers et al. [1] described generic principles for efficient benchmarking workflows and developed the open-source framework BeNNch, which streamlines the process for neural simulators such as NEST [2]. Here, we extend this work by integrating the framework into a continuous benchmarking workflow. We present solutions to automatically construct all necessary scripts, execute all runs, and centrally aggregate all results. New user-defined configurations are derived from prior studies, facilitating reproducibility and reducing the potential for errors. Comparability across platforms and code versions is achieved through a unified methodology for recording benchmark data and metadata. Our approach enables continuous benchmarking of simulation tools and thus early detection of performance degradation. Using NEST as an example, we demonstrate the potential of our continuous benchmarking workflow for advancing simulation technology for brain research.
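To illustrate the kind of step such a workflow automates, the following is a minimal sketch in Python: it times a benchmark run, records the result together with platform metadata in a central results file, and flags a possible slowdown against the history of prior runs. The file name `benchmark_results.json`, the regression threshold, and the helper functions are hypothetical illustrations under simple assumptions, not BeNNch's actual interface.

```python
"""Hypothetical sketch of one continuous-benchmarking step.

Times a simulation command, appends the result plus metadata to a
central JSON file, and warns if the run is notably slower than the
median of earlier runs of the same command. Not the BeNNch API.
"""
import json
import platform
import statistics
import subprocess
import sys
import time
from pathlib import Path

RESULTS = Path("benchmark_results.json")  # assumed central aggregation file
THRESHOLD = 1.10                          # flag runs >10% slower than history


def run_benchmark(cmd: list[str]) -> dict:
    """Execute one benchmark run and return a record with metadata."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return {
        "command": cmd,
        "wall_time_s": time.perf_counter() - start,
        "platform": platform.platform(),
        "python": platform.python_version(),
        "timestamp": time.time(),
    }


def is_regression(history: list[dict], new: dict) -> bool:
    """Compare the new run against the median of prior runs of the same command."""
    prior = [r["wall_time_s"] for r in history if r["command"] == new["command"]]
    return bool(prior) and new["wall_time_s"] > THRESHOLD * statistics.median(prior)


if __name__ == "__main__":
    # Usage (hypothetical): python bench.py python simulate_network.py
    history = json.loads(RESULTS.read_text()) if RESULTS.exists() else []
    record = run_benchmark(sys.argv[1:])
    if is_regression(history, record):
        print("WARNING: possible performance regression detected")
    history.append(record)
    RESULTS.write_text(json.dumps(history, indent=2))
```

In a continuous setup, a step like this would be triggered on each code change, with the aggregated history enabling the early detection of performance degradation described above.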
References
[1] Albers et al. (2022) A Modular Workflow for Performance Benchmarking of Neuronal Network Simulations. Front. Neuroinform. 16:837549. doi: 10.3389/fninf.2022.837549
[2] Gewaltig M-O & Diesmann M (2007) NEST (Neural Simulation Tool). Scholarpedia 2(4):1430
| Preferred form of presentation | Poster & advertising flash talk |
|---|---|
| Topic area | Simulator technology and performance |
| Keywords | Simulator performance, Benchmarking, Continuous Integration, Open-Source |
| Speaker time zone | UTC+2 |
| I agree to the copyright and license terms | Yes |
| I agree to the declaration of honor | Yes |