MORE THAN THE SUM OF ITS PARTS
Norma Kühn's PhD work in Göttingen sheds light on synergistic
motion decoding. Her findings were published in Neuron today.
As we move through our environment, images are projected onto our retinas. Retinal neurons extract relevant features to interpret the visual scene; for instance, "direction-selective" neurons signal certain directions of image motion. This is important, for example, for stabilizing our gaze as we move our head or eyes. But besides motion direction, these cells also respond to changes in brightness, which may complicate the readout of motion direction. In her PhD work, Norma Kühn set out to unravel how features are represented in complex visual scenes. The results have been published in today's edition of Neuron.
Many congratulations, Norma! Can you tell us what makes these new findings so interesting?
We often think of early visual processing as a situation where each retinal cell type transmits information about a distinct feature to downstream neurons. While this might be true for simplified laboratory stimuli, retinal cells usually multiplex information about more than one visual feature when stimuli are more complex.
In this work, we investigated how individual features can be extracted from neurons signaling information about several features simultaneously. While discerning different features from the signals of a single neuron is hardly possible, we found that correlation patterns between groups of neurons help to disambiguate these features. These correlations provide additional information that is not present in the individual responses, leading to a synergistic motion readout.
When did you first realize you were on to something really interesting?
The ability to extract motion direction from the signals of direction-selective cells is traditionally probed with uniformly drifting gratings. At first, we were just curious whether it is possible to reconstruct more complex motion patterns from the signals of these neurons.
This is when we found that the combined signals from several of these cells provided more information than the sum of the information carried by each cell individually, an effect often referred to as synergy. This was not expected at all and led us to investigate the mechanisms behind this phenomenon.
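The notion of synergy described here can be illustrated with a standard toy construction from information theory (this is a textbook-style sketch, not the actual retinal data or analysis from the paper): two model "neurons" whose individual responses each carry zero information about a binary stimulus, while their joint response identifies it perfectly.

```python
# Toy illustration of synergy: the joint response of two neurons can carry
# more information about a stimulus than the sum of their individual
# informations. Hypothetical XOR-style construction, not the study's data.
from collections import Counter
from math import log2

def mutual_information(pairs):
    """I(X;Y) in bits, estimated from equally weighted (x, y) samples."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum(c / n * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in joint.items())

# Binary stimulus s; neuron r1 responds randomly; neuron r2 = s XOR r1.
samples = [(s, r1, s ^ r1) for s in (0, 1) for r1 in (0, 1)]

i1 = mutual_information([(s, r1) for s, r1, r2 in samples])
i2 = mutual_information([(s, r2) for s, r1, r2 in samples])
i_joint = mutual_information([(s, (r1, r2)) for s, r1, r2 in samples])

synergy = i_joint - (i1 + i2)
print(i1, i2, i_joint, synergy)  # 0.0 0.0 1.0 1.0
```

Neither neuron alone tells us anything about the stimulus, yet together they determine it exactly, so the extra information lives entirely in the correlation pattern between the two responses.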
While the results appeared in this week's edition of Neuron, the work was done during your time as a PhD student in Göttingen. How did it influence your current research as a postdoc in the Farrow lab?
Our knowledge of how visual features extracted by the retina are used in the brain is still limited. Except for some dedicated pathways that, for instance, drive the pupillary reflex, we barely know how signals from groups of retinal cells are combined to recognize complex visual patterns and drive behavior.
In the Farrow lab, we study how different sets of visual inputs are routed in the superior colliculus to drive specific innate behaviors. Here, I have the great opportunity to test whether the rules we discovered, by which correlation patterns enhance feature readout, also apply to central neurons. This might shed new light on how central neurons integrate their sensory inputs to extract relevant features and drive behavior.