Topic 1: Dimensional reduction in neural data analysis
A common feature of cortical recordings is the great heterogeneity of response properties, and the variability of responses, across neurons. With the advent of large-scale multi-electrode recordings, an increasing number of labs face the question of how to address this variety of responses, both in describing the data and in interpreting their functional significance. Might the heterogeneous responses reflect different noisy views of a conserved low-dimensional computational structure? If so, how can that structure be identified, validated, and interpreted? We will study a number of advanced factor-analysis techniques for exploring and discovering structure in high-dimensional neural data, as well as machine learning techniques for manifold learning.
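As a toy illustration of the kind of structure these techniques target, the sketch below (synthetic data, not from any recording) generates a population whose heterogeneous responses are noisy mixtures of a few shared latent signals, then recovers that low-dimensional structure with PCA, the simplest member of the factor-analysis family:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic population: 100 "neurons" driven by a 3-dimensional latent signal.
T, n_neurons, n_latent = 500, 100, 3
latents = rng.standard_normal((T, n_latent))          # shared low-dim structure
loading = rng.standard_normal((n_latent, n_neurons))  # heterogeneous mixing weights
rates = latents @ loading + 0.5 * rng.standard_normal((T, n_neurons))  # private noise

# PCA via eigendecomposition of the covariance matrix.
X = rates - rates.mean(axis=0)
cov = X.T @ X / (T - 1)
evals = np.linalg.eigvalsh(cov)[::-1]  # eigenvalues in descending order
var_explained = evals[:n_latent].sum() / evals.sum()
print(f"variance captured by top {n_latent} PCs: {var_explained:.2f}")
```

Factor analysis proper refines this picture by separating shared variance from each neuron's private noise variance; PCA is used here only because it is the most compact instance of the idea.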
Topic 2: Scene statistics and cortical representation
Bayesian inference has long been proposed as a fundamental computational principle in the brain, an idea that can be traced back to Helmholtz. Central to understanding the neural basis of Bayesian perceptual inference is understanding how the statistical regularities of natural scenes are encoded in cortical representations to serve as priors in the inference process. Natural images, however, are enormously complex and may be best expressed in hierarchical form. Thus, a major challenge in computational vision is to understand the basic vocabulary of images, and the computational rules by which elementary components are composed into successive compositional structures that encode the hierarchical priors of natural scenes. We will explore statistical models of images, as well as compositional models such as deep belief networks (DBNs) and recursive compositional models (RCMs), for learning the hierarchical language of vision. We will also explore state-of-the-art techniques (e.g., Markov random fields and generalized linear models) for analyzing how these hierarchical scene priors are encoded in neural tuning and connectivity.
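The most elementary scene statistic underlying such priors is spatial correlation: neighboring pixels in natural images are far from independent. The sketch below (a crude synthetic stand-in, not real scene data) smooths white noise to mimic this structure and measures the adjacent-pixel correlation that a white-noise image lacks:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "natural-like" image: white noise smoothed by local averaging,
# a crude stand-in for the spatial correlations found in natural scenes.
noise = rng.standard_normal((256, 256))
kernel = np.ones((5, 5)) / 25.0
# 2-D smoothing via FFT convolution (kernel zero-padded to the image size)
img = np.real(np.fft.ifft2(np.fft.fft2(noise) * np.fft.fft2(kernel, s=noise.shape)))

# A basic scene statistic: correlation between horizontally adjacent pixels.
r_adjacent = np.corrcoef(img[:, :-1].ravel(), img[:, 1:].ravel())[0, 1]

# White noise has no such structure.
r_noise = np.corrcoef(noise[:, :-1].ravel(), noise[:, 1:].ravel())[0, 1]
print(f"adjacent-pixel correlation: image={r_adjacent:.2f}, white noise={r_noise:.2f}")
```

Hierarchical models such as DBNs and RCMs go far beyond this pairwise statistic, but it is the simplest regularity a scene prior must capture.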
Topic 3: Probabilistic inference, perception and prediction
While perception has been popularly formulated in terms of Bayesian inference at the theoretical level, little is known about the computational algorithms and neural implementation of perceptual inference. We will study a number of algorithms that have been effective in computer vision for learning and inference, particularly in the context of hierarchies of concepts and predictive models over time. Most of these algorithms, such as particle filtering, sampling, and mean-field approximation, are probabilistic in nature. Thus, we will explore potential neural codes for representing probability distributions, and examine the temporal dynamics of these codes with a view to understanding the neural algorithms for probabilistic perceptual inference. We will investigate how such a framework can account for various neural phenomena related to attention, feedback, and predictive remapping. If time permits, we will experiment with neural simulation packages such as Nengo for simulating realistic neural circuits.
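To make one of these algorithms concrete, the following sketch implements a bootstrap particle filter on a toy problem (synthetic data, with noise levels chosen for illustration): a hidden 1-D random walk is tracked from noisy observations by repeatedly propagating particles through the prior dynamics, reweighting them by the observation likelihood, and resampling:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hidden 1-D random walk observed through Gaussian noise.
T, n_particles = 100, 1000
q, r = 0.1, 0.5  # process and observation noise standard deviations
x = np.cumsum(q * rng.standard_normal(T))  # true latent trajectory
y = x + r * rng.standard_normal(T)         # noisy observations

# Bootstrap particle filter: predict, weight by likelihood, resample.
particles = np.zeros(n_particles)
estimates = np.empty(T)
for t in range(T):
    particles += q * rng.standard_normal(n_particles)    # predict (prior dynamics)
    w = np.exp(-0.5 * ((y[t] - particles) / r) ** 2)     # observation likelihood
    w /= w.sum()
    estimates[t] = np.sum(w * particles)                 # posterior-mean estimate
    idx = rng.choice(n_particles, size=n_particles, p=w) # resample by weight
    particles = particles[idx]

rmse_filter = np.sqrt(np.mean((estimates - x) ** 2))
rmse_obs = np.sqrt(np.mean((y - x) ** 2))
print(f"RMSE: raw observations={rmse_obs:.3f}, particle filter={rmse_filter:.3f}")
```

The particle set is a sample-based representation of the posterior; proposals for how neural populations might carry such a representation are exactly the kind of coding question this topic takes up.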
Topic 4: Neural decoding, neural stimulation and visual prosthesis
With an understanding of cortical representation and the neural mechanisms of perceptual inference, we can begin to explore how neural decoding and neural stimulation technology can be coupled with large-scale multi-electrode arrays to generate perceptual representations in the brain through electrical stimulation. There are over 40 million blind individuals in the world. A variety of invasive and noninvasive procedures have emerged over the years that use electrical stimulation to "restore" or create vision, ranging from retinal implants to electrical stimulation of the LGN and of the visual cortex. We will investigate how V1 and the extrastriate cortex can represent mental images and percepts, individually and together, in terms of theories, models, and neural evidence. We will study the literature on artificial vision in human and animal models, and develop proposals and paradigms for the development of visual prostheses through electrical stimulation.
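The decoding half of this loop can be illustrated in a few lines. The sketch below (entirely synthetic; the linear tuning model is an assumption made for illustration) fits a ridge-regression decoder that reads a 1-D stimulus back out of noisy population activity, the kind of read-out one would validate before attempting its inverse by stimulation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic decoding problem: recover a 1-D stimulus from a noisy
# linear population code recorded on a multi-electrode array.
T, n_neurons = 400, 60
stim = rng.uniform(-1, 1, T)
tuning = rng.standard_normal(n_neurons)  # per-neuron gain (hypothetical tuning)
spikes = np.outer(stim, tuning) + 0.8 * rng.standard_normal((T, n_neurons))

# Ridge-regression decoder fit on the first half, tested on held-out trials.
train, test = slice(0, 200), slice(200, 400)
lam = 1.0  # ridge penalty
X, s = spikes[train], stim[train]
w = np.linalg.solve(X.T @ X + lam * np.eye(n_neurons), X.T @ s)
pred = spikes[test] @ w
r = np.corrcoef(pred, stim[test])[0, 1]
print(f"decoder correlation on held-out trials: {r:.2f}")
```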
Topic 5: Vision, music, emotion and higher cognition
Vision is deeply connected to emotion and higher-order cognition. We exploit our visual system to reason about abstract and complex concepts; for example, we use curves and graphs to reason and make decisions. Visual images can evoke emotion, as can sound and music. We will explore the use of multi-channel EEG recording and neural decoding techniques to study the connections between vision and music, vision and emotion, vision and conceptual reasoning, and vision and psychiatric disorders.
Instructors | Office (Office hours) | Email (Phone) |
---|---|---|
Prof. Tai Sing Lee | Mellon Inst. Rm 115 | tai@cnbc.cmu.edu (412-268-1060) |
Evaluation | % of Grade |
---|---|
Term Project and Term Paper | 50 |
Class Discussion/Presentation | 50 |