Speaker
Description
Continual learning at the edge is an aspirational goal of AI technologies. Neuromorphic hardware that implements large Spiking Neural Networks (SNNs) is particularly attractive in this regard, thanks to its inherently local computational paradigm and its potential compatibility with future and emerging computing devices.
This talk will first give an overview of current methods for learning in SNNs using gradient-based approaches, which can achieve accuracy competitive with Deep Neural Networks (DNNs). The resulting learning algorithms can be implemented as local synaptic plasticity rules. However, like DNNs, they rely on data-intensive, iterative training processes that are incompatible with the realities of neuromorphic hardware. I will argue that gradient-based meta-learning (learning-to-learn) can play a key role in closing this gap by enabling accurate and fast learning that is robust to hardware non-idealities. These results bring neuromorphic engineering several steps closer to building intelligent agents that can continuously adapt to their environment in real time.
| Field | Value |
|---|---|
| Preferred form of presentation | Talk (& optional poster) |
| Topic area | models and applications |
| Keywords | Neuromorphic; SNN; Hardware; Learning |
| Speaker time zone | UTC+2 |
| I agree to the copyright and license terms | Yes |
| I agree to the declaration of honor | Yes |