SNG Conference - Dr Ian Charest
Fri., Feb. 12
|Zoom platform
Semantic representations of natural scenes in the human brain
Time and location
Feb. 12, 2021, 14:00 UTC−5
Zoom platform
About the event
Dr Ian Charest is a cognitive computational neuroscientist broadly interested in high-level vision and audition. He leads the Charest Lab at the University of Birmingham and the Centre for Human Brain Health, where he and his team investigate object recognition in the brain using neuroimaging techniques such as magneto- and electroencephalography (M/EEG) and functional magnetic resonance imaging (fMRI). His work makes use of advanced computational modelling and analysis techniques, including machine learning, representational similarity analysis (RSA), and artificial neural networks (ANNs), to better understand human brain function. Current research topics in the lab include information processing in the brain during perception, memory, and visual consciousness when recognising and interpreting natural scenes and visual objects. The laboratory is currently funded by a European Research Council Starting Grant (ERC-StG 759432) to investigate object recognition and visual consciousness, with a focus on individual differences in brain and behaviour.
Conference summary:
Inferring the semantic content of sensory inputs is a fundamental computational challenge faced by the human brain. While prior research provides evidence for the existence of semantic representations, progress has generally been hampered by limited experimental data and a lack of computational theory. In this talk, I will describe recent advances in which we combine modern machine learning methods with massive neuroimaging datasets to better understand visual semantics, consciousness, and individual differences in brain function. In a series of experiments, we observed semantic representations distributed across a vast network of brain regions involved in visual recognition and visual consciousness. Our results provide a broad view of how the visual system transforms sensory inputs into the high-level representations most relevant for cognition and behaviour. Furthermore, they suggest that the visual system does not simply seek to compute object category labels, but instead might be driven to learn feature representations that support rich semantic interpretations of ongoing events in the world.