Our research focuses on two levels of computation. Models of high-level computation aim to understand the representations that humans use to learn about their environment. The way information is represented constrains both how new information can be acquired and how learned information can be exploited to achieve various (e.g. behavioral) goals. Since learning has to be performed on high-dimensional, noisy and ambiguous stimuli, probabilistic models are well suited to the task, as they can handle all of these issues. Furthermore, Bayesian probabilistic models provide a normative theory of learning, which enables us to compare model performance with human data. We test these theories by analyzing human behavior in experiments: by tracking participants’ eye movements, we analyze how learning shapes the development of efficient movement strategies.
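To make the model-data comparison concrete, below is a minimal sketch of fitting a Bayesian ideal observer to behavioral choices in a simple discrimination task. The task, the Gaussian noise model, and the (simulated) choice data are illustrative assumptions rather than the lab's actual paradigm.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def p_choose_positive(x, sigma):
    # Ideal observer with a flat prior over the two alternatives: the internal
    # measurement is m ~ N(x, sigma) and "+" is reported whenever m > 0,
    # giving the psychometric curve Phi(x / sigma).
    return norm.cdf(x / sigma)

# Hypothetical behavioural data: signed stimulus strengths and binary choices.
stim = rng.uniform(-1.0, 1.0, size=500)
true_sigma = 0.35                              # "human" sensory noise level
choices = rng.random(500) < p_choose_positive(stim, true_sigma)

def neg_log_lik(sigma):
    # Bernoulli likelihood of the observed choices under the ideal observer.
    p = np.clip(p_choose_positive(stim, sigma), 1e-9, 1 - 1e-9)
    return -np.sum(np.where(choices, np.log(p), np.log(1 - p)))

# Fit the single free parameter (sensory noise) by a simple grid search.
sigmas = np.linspace(0.05, 1.0, 200)
best = sigmas[np.argmin([neg_log_lik(s) for s in sigmas])]
print(f"recovered sensory noise: {best:.2f} (ground truth {true_sigma})")
```

With real data, the same likelihood lets one ask whether the fitted normative observer reproduces participants' psychometric curves, which is the sense in which model performance can be compared with human behavior.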
Our investigations of low-level computation address how neurons deal with the problems posed by extremely rich stimuli. Optimal inference and learning require that neurons represent not only the inferred values of environmental features but also the uncertainty associated with them. The focus is on how such a representation can be built and how these principles shape neural responses. We use probabilistic models to account for both evoked and spontaneous activity in the visual system.
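One concrete way to link probabilistic representations to evoked and spontaneous activity is the sampling view sketched below: "evoked activity" is treated as samples from the posterior over latent features given a stimulus, and "spontaneous activity" as samples from the prior. The linear-Gaussian generative model, dimensions and parameters here are arbitrary toy choices.

```python
import numpy as np

rng = np.random.default_rng(1)
d_lat, d_obs, sigma = 4, 8, 0.5                      # arbitrary toy sizes

A = rng.normal(size=(d_obs, d_lat))                  # generative weights
L = rng.normal(size=(d_lat, d_lat))
C = L @ L.T + np.eye(d_lat)                          # prior covariance of latents
post_cov = np.linalg.inv(np.linalg.inv(C) + A.T @ A / sigma**2)

spont, evoked = [], []
for _ in range(10000):
    z = rng.multivariate_normal(np.zeros(d_lat), C)              # latent features
    x = A @ z + rng.normal(scale=sigma, size=d_obs)              # stimulus from the model
    spont.append(rng.multivariate_normal(np.zeros(d_lat), C))    # "spontaneous": prior sample
    post_mean = post_cov @ A.T @ x / sigma**2
    evoked.append(rng.multivariate_normal(post_mean, post_cov))  # "evoked": posterior sample

spont, evoked = np.array(spont), np.array(evoked)
print("max |prior covariance entry|  :", np.abs(C).max().round(2))
print("max |spontaneous - evoked cov|:", np.abs(np.cov(spont.T) - np.cov(evoked.T)).max().round(2))
```

Because the stimuli are themselves generated by the model, averaging posterior samples over stimuli recovers the prior (law of total probability), which is why the stimulus-averaged "evoked" statistics converge to the "spontaneous" ones.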
Research highlights
- Cognitive tomography
  Mind reading from discrete choices: using humans’ button presses to infer task-independent internal representations of stimuli
- Darkness sheds light on neural activity
  Revealing why there is a perplexing similarity in sensory cortex between neural activity recorded during stimulus presentation and activity recorded when there is no stimulus at all
- Sources of variability
- Learning the bricks of vision
  Characterizing the mathematical principles underlying the representations learned about complex visual stimuli
- Linking response variability to perceptual inference
  Stochastic sampling, an approximate inference method, predicts systematic changes in the ‘noisiness’ of neural responses (illustrated in a sketch after this list)
- Representational untangling in V1
  Contribution of the firing rate nonlinearity to the construction of linearly decodable codes (illustrated in a sketch after this list)
- Planning in hippocampus
  Sampling as a substrate for the probabilistic computations needed to navigate the cognitive map
- Multiplexing in the visual cortex
  How variables that are relevant for task execution but do not correspond to direct sensory input are represented in V1
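The highlight on linking response variability to perceptual inference rests on a simple consequence of sampling-based codes, illustrated by the following one-dimensional toy model (all numbers are arbitrary): stronger sensory evidence narrows the posterior over the encoded feature, so responses drawn from that posterior vary less across trials.

```python
import numpy as np

rng = np.random.default_rng(3)
sigma_prior, sigma_obs, z_true = 1.0, 0.5, 0.8        # toy, arbitrary numbers

for contrast in [0.1, 0.5, 1.0, 2.0]:
    samples = []
    for _ in range(2000):
        x = contrast * z_true + rng.normal(scale=sigma_obs)            # noisy evidence
        post_var = 1.0 / (1.0 / sigma_prior**2 + contrast**2 / sigma_obs**2)
        post_mean = post_var * contrast * x / sigma_obs**2
        samples.append(rng.normal(post_mean, np.sqrt(post_var)))       # one "response"
    print(f"contrast {contrast:.1f}: across-trial response variance {np.var(samples):.3f}")
```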
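The representational untangling highlight can similarly be illustrated with a toy simulation rather than the published analysis: membrane-potential-like responses of phase-sensitive model units carry orientation information entangled with the nuisance grating phase, and a threshold-power firing rate nonlinearity makes that information accessible to a linear decoder (the tuning model, thresholds and scikit-learn decoder below are illustrative choices).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_units, n_trials = 40, 4000
pref_ori = rng.uniform(0, np.pi, n_units)        # preferred orientations
pref_phase = rng.uniform(0, 2 * np.pi, n_units)  # preferred spatial phases

def membrane_potential(ori, phase):
    # Simple-cell-like response: orientation tuning modulated by grating phase.
    tuning = np.exp(-np.sin(ori - pref_ori) ** 2 / 0.1)
    return tuning * np.cos(phase - pref_phase) + 0.2 * rng.normal(size=n_units)

labels = rng.integers(0, 2, n_trials)             # two orientation classes
oris = np.where(labels == 1, np.pi / 2, 0.0)      # 0 deg vs 90 deg gratings
phases = rng.uniform(0, 2 * np.pi, n_trials)      # nuisance: random grating phase
U = np.array([membrane_potential(o, p) for o, p in zip(oris, phases)])
R = np.maximum(U - 0.2, 0.0) ** 2                 # threshold-power firing rate

for name, X in [("membrane potentials", U), ("firing rates", R)]:
    Xtr, Xte, ytr, yte = train_test_split(X, labels, random_state=0)
    acc = LogisticRegression(max_iter=2000).fit(Xtr, ytr).score(Xte, yte)
    print(f"linear decoding of orientation from {name}: {acc:.2f}")
```

Decoding from the raw membrane potentials stays near chance because, with the phase randomized, both orientation classes have the same (zero) mean response; rectification turns the phase-modulated amplitude into a mean rate that depends on orientation.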