Description
Artificial neural networks (ANNs) have a long history of drawing inspiration from the brain, particularly the visual system. While the performance of ANNs on visual tasks continues to improve, even state-of-the-art visual architectures hit a brain-similarity ceiling, making them unreliable tools for gathering insights into the brain. Our goal is to build a model of the visual system that breaks past this ceiling by incorporating brain-like connections, particularly top-down feedback.
To that end, we built a publicly available code base that converts a connectome file into a top-down recurrent model in which each anatomical node from the connectome is represented by a recurrent convolutional layer. Users can further customize the relative layer and receptive field sizes, the mechanism of top-down feedback, and the proportion of feedback and feedforward inputs for each layer to better match the brain area they wish to model.
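As a minimal sketch of the idea (not our actual code base or its API; the toy connectome, names, and PyTorch framing below are all assumptions for illustration), a connectome given as nodes and directed edges can be compiled into one recurrent convolutional layer per anatomical node, with per-edge projections and a tunable weighting between feedforward and feedback inputs:

```python
import torch
import torch.nn as nn

# Hypothetical connectome: each node is a visual area with per-area settings,
# each edge is a directed feedforward or feedback (top-down) connection.
CONNECTOME = {
    "nodes": {
        "V1": {"channels": 16, "kernel": 3},
        "V2": {"channels": 32, "kernel": 3},
        "V4": {"channels": 48, "kernel": 5},  # larger receptive field
    },
    "edges": [
        ("V1", "V2", "feedforward"),
        ("V2", "V4", "feedforward"),
        ("V4", "V2", "feedback"),
        ("V2", "V1", "feedback"),
    ],
}

class ConnectomeModel(nn.Module):
    """Each anatomical node becomes a recurrent convolutional layer; edges
    become 1x1 projections between areas (all areas are kept at the same
    spatial resolution here for simplicity)."""

    def __init__(self, connectome, in_channels=3, feedback_weight=0.5):
        super().__init__()
        self.connectome = connectome
        self.feedback_weight = feedback_weight  # weight of feedback vs. feedforward drive
        self.recurrent = nn.ModuleDict()
        self.edge_proj = nn.ModuleDict()
        for name, cfg in connectome["nodes"].items():
            c, k = cfg["channels"], cfg["kernel"]
            self.recurrent[name] = nn.Conv2d(c, c, k, padding=k // 2)
        for src, dst, _ in connectome["edges"]:
            c_src = connectome["nodes"][src]["channels"]
            c_dst = connectome["nodes"][dst]["channels"]
            self.edge_proj[f"{src}->{dst}"] = nn.Conv2d(c_src, c_dst, 1)
        # External (sensory) input drives the first listed area.
        self.first = next(iter(connectome["nodes"]))
        self.input_proj = nn.Conv2d(
            in_channels, connectome["nodes"][self.first]["channels"], 1)

    def forward(self, x, steps=4):
        # One state tensor per area, updated over unrolled time steps.
        states = {
            name: x.new_zeros(x.shape[0], cfg["channels"], *x.shape[2:])
            for name, cfg in self.connectome["nodes"].items()
        }
        for _ in range(steps):
            new_states = {}
            for name in self.connectome["nodes"]:
                drive = self.recurrent[name](states[name])
                if name == self.first:
                    drive = drive + self.input_proj(x)
                for src, dst, kind in self.connectome["edges"]:
                    if dst != name:
                        continue
                    msg = self.edge_proj[f"{src}->{dst}"](states[src])
                    w = self.feedback_weight if kind == "feedback" else 1.0
                    drive = drive + w * msg
                new_states[name] = torch.relu(drive)
            states = new_states
        return states

model = ConnectomeModel(CONNECTOME)
out = model(torch.randn(1, 3, 32, 32))  # dict of per-area activity maps
```

The feedforward/feedback balance is a single scalar here for brevity; in practice it, like the kernel and channel sizes, would be settable per layer as described above.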
We used our tool to compare multiple hypotheses about the mechanism of top-down feedback using more realistic connectomes. We found that all of the proposed mechanisms of top-down modulation can resolve ambiguous sensory input using auditory cues. However, feedback that simultaneously modulates both threshold and gain helped models perform better on more difficult tasks, such as occluded image recognition and tasks that require completely ignoring sensory input. The improved performance of the dual threshold-and-gain mechanism is particularly interesting given previous research suggesting that this dual mechanism produces more brain-like firing patterns in artificial neurons.
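As a toy illustration of the mechanisms being compared (hypothetical code, not drawn from our experiments), additive feedback shifts a unit's effective threshold, multiplicative feedback scales its gain, and the dual mechanism applies both at once, which is what lets feedback fully gate out or substitute for sensory drive:

```python
import torch

def threshold_feedback(drive, fb):
    # Additive feedback: shifts the unit's effective firing threshold.
    return torch.relu(drive + fb)

def gain_feedback(drive, fb):
    # Multiplicative feedback: scales the unit's responsiveness to its input.
    return torch.relu(drive * (1.0 + fb))

def dual_feedback(drive, fb_gain, fb_thresh):
    # Dual mechanism: gain and threshold modulation applied together, so
    # feedback can silence a unit regardless of its input (fb_gain -> -1)
    # while still being able to drive it directly (positive fb_thresh).
    return torch.relu(drive * (1.0 + fb_gain) + fb_thresh)

drive = torch.tensor([2.0])  # bottom-up sensory drive
out = dual_feedback(drive, fb_gain=torch.tensor([-1.0]),  # gate out the input...
                    fb_thresh=torch.tensor([0.5]))        # ...yet still produce output
print(out)  # tensor([0.5000])
```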
We hope that our code base will enable users to easily compare different biological and non-biological connection schemes in silico, yielding similar insights into how different computations and connectivity schemes shape function. Our demonstration of the utility of these top-down feedback architectures suggests that they could lead to ANNs with more human-like capabilities.