Aalto University's Laboratory for Computational Modeling of Human Cognition
Mission
It is our mission to understand the complex cognitive processes of the brain through computational models.
Much effort has gone into measuring and visualizing the brain activity of volunteers in the scanner. However, we believe that data alone is not enough to understand the brain. If brain activity is an indication of the brain at work, what work is the brain actually doing? What computational processes are we observing? To answer this, we build computational models that mimic the high-level processes of the brain closely enough that we can compare them to the brain activity we observe.
Aalto University is famous for its pioneering work in brain imaging methods development. It is one of the birthplaces of MEG and is now performing groundbreaking work on optically pumped magnetometer (OPM) sensor technology to push the possibilities even further. This lab aims to provide a strong computational modeling counterpart to the imaging work.
Lab Members
Shristi Baral
PhD Candidate
Creating semantic representations through integration of multiple modality-specific processing streams.
Jiaxin You
PhD Candidate
Resolution of misspelled words through top-down connections. Also part of the Imaging Language group.
Marijn van Vliet
Academy Research Fellow
Constructing computational models of visual word recognition.
Research Projects
Convolutional networks can model the functional modulation of MEG responses during reading
Marijn van Vliet, Oona Rinkinen, Takao Shimizu, Anni-Mari Niskanen, Barry Devereux, Riitta Salmelin
To better understand the computational steps that the brain performs during reading, we used a convolutional neural network as a computational model of visual word recognition, the first stage of reading. In contrast to traditional models of reading, our model directly operates on the pixel values of an image containing text, and has a large vocabulary of 10k Finnish words. The same stimuli can thus be presented unmodified to both the model and human volunteers in an MEG scanner. In a direct comparison between model and brain activity, we show that the model accurately predicts the amplitude of three evoked MEG response components commonly observed during reading. We conclude that the deep learning techniques that revolutionized models of object recognition can also create models of reading that can be straightforwardly compared to neuroimaging data, which will greatly facilitate testing and refining theories on language processing in the brain.
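As a rough illustration of this kind of model (not the published architecture; the layer sizes below are assumptions, only the 10,000-word vocabulary follows the project description), a convolutional network mapping the pixels of a word image to per-word scores might look like:

```python
import torch
from torch import nn

# Minimal sketch: a CNN that maps a grayscale image of a written word
# to one score (logit) per entry in a 10,000-word vocabulary.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d((4, 4)),
    nn.Flatten(),
    nn.Linear(32 * 4 * 4, 10_000),  # one output unit per vocabulary word
)

image = torch.rand(1, 1, 64, 128)  # batch of one rendered word image
logits = model(image)
print(logits.shape)  # torch.Size([1, 10000])
```

Because the model consumes raw pixels, the exact stimulus images shown to volunteers in the MEG scanner can be fed to the network unmodified.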
Models
The CMHC lab maintains a set of models that you can interact with through your browser.
Word2Vec Guessing Game
Think of a target word and give "clue" words below that describe the target. The word2vec model will give the 10 words closest to the semantic mean of the given clues. See if you can make the computer guess your chosen target word.
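The guessing step can be sketched in plain NumPy on a toy vocabulary (the words and vectors below are illustrative stand-ins for a trained word2vec model, which would have a far larger vocabulary and higher-dimensional vectors):

```python
import numpy as np

# Toy embedding table standing in for a trained word2vec model.
vocab = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.75, 0.8]),
    "crown": np.array([0.8, 0.7, 0.4]),
    "apple": np.array([0.1, 0.9, 0.2]),
    "fruit": np.array([0.15, 0.85, 0.25]),
}

def guess(clues, topn=3):
    """Return the topn words closest (by cosine) to the mean of the clue vectors."""
    unit = {w: v / np.linalg.norm(v) for w, v in vocab.items()}
    mean = np.mean([unit[c] for c in clues], axis=0)
    mean /= np.linalg.norm(mean)
    scores = {w: float(unit[w] @ mean) for w in vocab if w not in clues}
    return sorted(scores, key=scores.get, reverse=True)[:topn]

print(guess(["queen", "crown"]))  # "king" ranks first in this toy vocabulary
```

The live demo does the same thing with the top 10 neighbors drawn from a full word2vec vocabulary.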
Word2Vec Semantic Projection
Describe a dimension by thinking of words that create a contrast. Make sure to enter them "pairwise", meaning that the first negative word is the antonym of the first positive word. Finally, enter some words to be compared along the dimension.
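The projection itself can be sketched as follows (again with toy stand-in vectors; a real model's embeddings would be learned from text):

```python
import numpy as np

# Toy 2-D embeddings; in this illustrative example the first coordinate
# roughly encodes "size".
vocab = {
    "big":      np.array([1.0, 0.1]),
    "small":    np.array([-1.0, 0.1]),
    "large":    np.array([0.9, 0.2]),
    "tiny":     np.array([-0.9, 0.2]),
    "elephant": np.array([0.8, 0.5]),
    "mouse":    np.array([-0.7, 0.6]),
}

def project(positive, negative, targets):
    """Score target words along the axis defined by pairwise word contrasts."""
    # Pairwise differences: first positive minus its antonym, and so on.
    diffs = [vocab[p] - vocab[n] for p, n in zip(positive, negative)]
    axis = np.mean(diffs, axis=0)
    axis /= np.linalg.norm(axis)
    return {t: float(vocab[t] @ axis) for t in targets}

# "elephant" scores higher than "mouse" on the big-vs-small axis.
print(project(["big", "large"], ["small", "tiny"], ["elephant", "mouse"]))
```

Averaging several antonym pairs makes the resulting axis less sensitive to the quirks of any single word pair.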
Software
MNE-RSA
Representational similarity analysis for MNE-Python
A plugin for MNE-Python to perform representational similarity analysis (RSA) on EEG and MEG data in a searchlight fashion. It includes best-practice features such as cross-validation and PCA preprocessing.
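At its core, RSA compares the pairwise-dissimilarity structure of two sets of activity patterns. A minimal NumPy/SciPy sketch of that core computation (this is not the MNE-RSA API, and the data below are synthetic):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
# Fake data: 20 stimuli, one activity pattern per stimulus (e.g. sensor
# amplitudes for brain data, layer activations for a model).
brain_patterns = rng.normal(size=(20, 64))
# Model patterns correlated with the brain patterns by construction.
model_patterns = brain_patterns @ rng.normal(size=(64, 32))

# A representational dissimilarity matrix (RDM) holds the pairwise
# distances between the patterns evoked by each stimulus
# (pdist returns the condensed upper triangle).
brain_rdm = pdist(brain_patterns, metric="correlation")
model_rdm = pdist(model_patterns, metric="correlation")

# RSA score: rank correlation between the two RDMs.
rho, _ = spearmanr(brain_rdm, model_rdm)
print(f"RSA similarity: {rho:.2f}")
```

The searchlight variant repeats this comparison within a small window that slides over sensors, sources, or time points.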
Pytorch-HMAX
The HMAX model of vision implemented in PyTorch
A PyTorch implementation of the HMAX model that closely follows the MATLAB implementation by the Laboratory for Computational Cognitive Neuroscience at Georgetown University.