Many recent advances in artificial neural networks have been inspired by computations in the human brain. Even so, the processes in modern neural networks often do not resemble the biological processes they draw inspiration from. Differences from the human visual system, for instance, lead to diverging representations at higher levels and diverging behaviors, such as an inability to handle occluded, blurred, and otherwise challenging stimuli that humans can identify from context.
We hypothesize that these differences stem from the omission of the top-down feedback connections that are prevalent in the brain. Using human brain connectivity and cortical data, we aim to build a model of the visual system that uses top-down feedback. BigBrain data will be used to determine the hierarchical level of each area, with properties such as cortical thickness approximating its position along the sensory-associational gradient. Preliminary results show that modulatory top-down feedback connections carrying contextual information or an auditory cue help different networks identify ambiguous images.
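To make the notion of modulatory feedback concrete, the following is a minimal sketch, not the authors' model: a bottom-up feature vector is multiplicatively gated by a signal derived from a top-down context (e.g., a scene label or auditory cue), so feedback scales the feedforward drive rather than replacing it. The names (`bottom_up`, `context`, `W_top_down`) and the specific gating form are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Squash the top-down drive into (0, 1) so it acts as a gain, not a drive
    return 1.0 / (1.0 + np.exp(-x))

bottom_up = rng.normal(size=(8,))     # features from an ambiguous image (illustrative)
context = rng.normal(size=(4,))       # contextual or auditory cue (illustrative)
W_top_down = rng.normal(size=(8, 4))  # assumed feedback projection, cue -> feature space

gate = sigmoid(W_top_down @ context)  # modulatory gain per feature, in (0, 1)
modulated = bottom_up * (1.0 + gate)  # feedback amplifies features consistent with the cue
```

The multiplicative form `bottom_up * (1.0 + gate)` is one common way to express "modulatory" feedback: with no informative cue the feedforward response passes through roughly unchanged, while a strong cue can at most double the drive, never inject activity on its own.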
By incorporating human brain-based inductive biases and modulatory top-down feedback, we hope to build a model of the visual system that more closely matches the representations and capabilities of the human brain. The model is intended to be useful both for tasks that require human-like decision making and for studies investigating the role of various connections in the visual system.