Professor of Psychology and Neural Science
Department of Psychology: Program in Cognition & Perception
Center for Neural Science
Center for Brain Imaging
Center for Neuroeconomics

Click here for information on courses I'm teaching

My interview for NPR's Science Friday, October 2012

Research | Biography | Publications | Address


How does vision determine the size, shape and boundaries of objects in our environment? Research in my laboratory centers on various aspects of visual perception and the visual control of action. Recent research on the visual system has grown into an exciting collaboration among psychologists, physiologists, computer scientists, and mathematicians. My research continues to blur the lines between these fields in two ways. First, traditional psychophysical methods are enhanced using advanced computer graphics and image processing techniques for stimulus generation and analysis. Second, both mathematical methods and computer simulations are used to model the psychophysical results. As much as possible, the simulation models attempt to reflect a feasible physiological implementation, as I have a strong interest in neural network models of vision. Next, I describe each broad area of my research in turn.

Texture-defined edge based on a difference in texture element orientation.

Texture and pattern coding. Sometimes when one texture pattern is placed on a background of another, the two segregate quickly and seemingly effortlessly into foreground and background. Other times they do not. Why is this? What sequence of linear and nonlinear image transformations leads to this variation in texture segregation performance? Our research in this area consists of both psychophysical experiments and computational modeling to determine the details of the visual machinery used to code, interpret and segregate texture patterns. More recently, we have also looked at the identification of shapes defined by texture (e.g., letters) and the estimation of texture properties (e.g., surface roughness) in 3-d rendered scenes. We have examined the cortical coding of 2nd-order patterns by looking for orientation-selective adaptation of cortical responses to 1st- and 2nd-order patterns using functional magnetic resonance imaging (fMRI) in collaboration with David Heeger (NYU). We are also exploring a new model we have developed to account for cortical pattern adaptation, which has implications for computational theory, physiology and perception.
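A standard working hypothesis for such a sequence of transformations is a filter-rectify-filter cascade: a fine-scale oriented linear filter, a pointwise nonlinearity, and a coarse-scale second-stage filter tuned to the texture-defined boundary. The sketch below is a minimal illustration of that cascade; the filter sizes, wavelengths and the squaring nonlinearity are illustrative choices, not fitted model parameters.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor(size, wavelength, orientation, sigma):
    """Odd-symmetric Gabor filter (parameters here are illustrative)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(orientation) + y * np.sin(orientation)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.sin(2.0 * np.pi * xr / wavelength)

def frf_response(image, first_orient, second_orient):
    """Filter-rectify-filter cascade:
    stage 1 -- fine-scale oriented linear filter;
    rectification -- squaring nonlinearity (local texture energy);
    stage 2 -- coarse-scale filter tuned to the texture-defined edge."""
    stage1 = fftconvolve(image, gabor(21, 4.0, first_orient, 3.0), mode='same')
    rectified = stage1 ** 2
    return fftconvolve(rectified, gabor(61, 32.0, second_orient, 16.0), mode='same')
```

Running this on an image whose two halves contain differently oriented stripes yields a strong second-stage response along the texture-defined edge, even though a purely linear mechanism would see no luminance edge there.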

This is an animation of our "overt-criterion" task. Observers discriminate between two noisy categories of ellipses differing in their mean orientation. On each trial, observers rotate a line to indicate their decision criterion; an ellipse is then presented, and the trial is scored based on whether the ellipse's category places it on the correct side (clockwise or counterclockwise) of the line's orientation.

Perceptual decision-making. I study how observers make perceptual decisions under uncertainty. Sensory signals are noisy, and an ideal observer will combine such signals with knowledge of their uncertainty, prior expectations and knowledge of potential outcome- and decision-contingent rewards to guide decisions. We ask whether humans act as ideal decision-makers and, if not, where compromises are made or heuristics used. We have shown that orientation estimation appears to be consistent with the ideal-observer model and that humans use a prior distribution of orientations that matches environmental statistics. We have studied how the decision criterion for a perceptual discrimination is placed as a function of rewards, prior probabilities and changing conditions. We have developed a new model of how sensory evidence is accumulated over time, which has implications for modeling reaction-time and cued-response tasks.
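For the simplest case of two equal-variance Gaussian categories, the ideal placement of the decision criterion has a closed form: shift the neutral (midpoint) criterion by an amount that depends on the prior and reward ratios. The sketch below assumes rewards are given only for correct responses; it illustrates the ideal-observer logic, not any particular experiment's payoff structure.

```python
import numpy as np

def optimal_criterion(mu0, mu1, sigma, prior1=0.5, reward0=1.0, reward1=1.0):
    """Ideal criterion for discriminating two equal-variance Gaussian
    categories, maximizing expected reward. Respond "category 1" when
    the observation exceeds the returned criterion."""
    prior0 = 1.0 - prior1
    beta = (prior0 * reward0) / (prior1 * reward1)
    # Solve p(x|1)/p(x|0) = beta; the Gaussian likelihood ratio is
    # monotonic in x, so the solution is a single cutoff.
    return (mu0 + mu1) / 2.0 + sigma**2 * np.log(beta) / (mu1 - mu0)
```

With equal priors and rewards the criterion sits at the category midpoint; raising the prior probability or reward of one category shifts the criterion away from that category's mean, so that more observations are classified in its favor.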

This is a frame from a stimulus movie of a pair of rotating cylinders. The images include the depth cues of binocular disparity (stereo), motion, texture perspective and occluding contour.

Sensory cue integration. I have worked extensively on the issue of how the visual system combines information from multiple sources or cues. This research has been continuing for a number of years in collaboration with Larry Maloney (NYU), Marty Banks (Berkeley), Wendy Adams, as well as several graduate students and postdoctoral research associates. This work begins by considering what an ideal decision maker would do in such a situation. Human performance in cue-combination tasks is compared to models based on statistical decision theory. In many cases, Bayesian models are used in which it is understood that information sources (visual cues or prior knowledge) are uncertain, and should be combined with reference to the form and amount of uncertainty in each. Our studies have looked at cue combination in the perception of depth from multiple depth cues (binocular disparity, motion, texture, shading, contour, etc.), depth cue disambiguation (in stimuli with contour and shading cues), depth cue scaling (with multiple cues to the viewing geometry, including the possible interaction of the motion and stereo cues to compute the viewing distance) and edge localization (with cues of texture scale, orientation and contrast). More recent work considers multisensory cue integration including vision, audition, proprioception and touch.
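The core prediction of the weak-fusion account is a minimum-variance weighted average: each cue is weighted in proportion to its reliability (inverse variance). A minimal sketch of that combination rule:

```python
import numpy as np

def combine_cues(estimates, sigmas):
    """Minimum-variance linear cue combination: weight each cue by its
    reliability (1/variance), normalized so the weights sum to one.
    Returns the combined estimate and its standard deviation."""
    estimates = np.asarray(estimates, dtype=float)
    reliabilities = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    weights = reliabilities / reliabilities.sum()
    combined = weights @ estimates
    combined_sigma = np.sqrt(1.0 / reliabilities.sum())
    return combined, combined_sigma
```

A key signature of this rule, tested in many of the studies above, is that the combined estimate is more reliable (lower variance) than the estimate from either cue alone.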

Julia in the Optotrak
This is one of our subjects (Julia Trommershäuser) set up for the collection of movement data while performing a pointing task. Infra-red emitting diodes are strapped to her finger and arm. Their motion path is tracked in 3D by the three infrared-sensitive cameras of an Optotrak system, visible in the background.

Visual control of action. We have also applied statistical decision theory to modeling visuo-motor control. In this research, subjects perform pointing or other tasks under tight time constraints. Subjects earn points (and eventually, money) for fast, accurate performance of the task (pointing at a target region), but lose points if they respond late or point towards penalty regions. By measuring outcome uncertainty (the variance in motor outcome), we can compute the optimal aim point for any configuration of payoff and penalty regions and values. In a variety of situations, subjects are optimal or near-optimal in this task. That is, they earn as many points as would have been earned by an ideal movement planner having the same movement variability as the subject. Subjects appear to have available an estimate of their movement variability and take it into account in movement planning, even in situations in which that variability has been increased (artificially) by the experimenter. More recently, we have studied movement planning for reaches and saccadic eye movements using both learning and adaptation experiments to study the coordinate systems in which movements are planned.
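The optimal aim point can be computed by integrating the motor-noise distribution over the reward and penalty regions and maximizing expected gain. The one-dimensional sketch below is illustrative only: the Gaussian noise model, the gain and loss values, and the brute-force grid search are assumptions for the example, not the experimental setup.

```python
import numpy as np
from scipy.stats import norm

def expected_gain(aim, sigma, target, penalty, gain=100.0, loss=-500.0):
    """Expected score for aiming at `aim` with Gaussian motor noise of
    standard deviation `sigma`. `target` and `penalty` are (lo, hi)
    intervals; gain/loss values are illustrative."""
    p_hit = norm.cdf(target[1], aim, sigma) - norm.cdf(target[0], aim, sigma)
    p_pen = norm.cdf(penalty[1], aim, sigma) - norm.cdf(penalty[0], aim, sigma)
    return gain * p_hit + loss * p_pen

def optimal_aim(sigma, target, penalty):
    """Brute-force search for the aim point maximizing expected gain."""
    grid = np.linspace(-5.0, 5.0, 2001)
    gains = [expected_gain(a, sigma, target, penalty) for a in grid]
    return grid[int(np.argmax(gains))]
```

With a penalty region abutting one side of the target, the optimal aim point shifts away from the target center toward the safe side, by an amount that grows with the planner's motor variability.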

This is an animation of a 3-d shape rocking back and forth, thus cued by relative motion. The dots that carry the motion flicker occasionally so as to eliminate the possible cue of changing local dot density. Despite the elimination of that cue and the flicker, it is relatively easy to perceive (and to judge) the 3-d shape.

Depth perception. I am interested in the details of how the visual system determines depth and object shape using a variety of visual cues. I have done computational and psychophysical studies concerning several such cues including the kinetic depth effect, binocular stereopsis, and shape from texture, contour and shading. Binocular stereopsis is particularly interesting as the raw information (the disparities in the positions of features in the images from the two eyes) must be scaled based on estimates of the gaze distance (vergence) and direction (version), and these can, in turn, be estimated using both retinal cues (the pattern of vertical disparities) and extra-retinal cues (knowledge of the eyes' positions).
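The need for disparity scaling follows from simple viewing geometry: under a small-angle approximation, the depth interval corresponding to a fixed relative disparity grows with the square of viewing distance. A sketch of that relation (the interocular distance of 0.065 m is a typical illustrative value, not a measured one):

```python
def depth_from_disparity(disparity_rad, viewing_distance, iod=0.065):
    """Small-angle approximation: depth interval (m) signaled by a relative
    disparity (rad) at a given viewing distance (m). Because the estimate
    scales with distance squared, raw disparities must be combined with an
    estimate of viewing distance to recover metric depth."""
    return disparity_rad * viewing_distance**2 / iod
```

The quadratic dependence on distance is why the same retinal disparity can signal very different depth intervals, and hence why estimates of vergence and version matter so much for stereopsis.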



I have an enduring interest in the use of computational techniques to study human vision. My doctoral dissertation concerned the computer simulation of a neural network model of visual learning. For this work, I received the Ph.D. from the Department of Computer and Communication Sciences of the University of Michigan in 1981, having worked primarily with John Holland. I then moved to New York University and worked as a postdoctoral research associate with George Sperling, examining aspects of low-bandwidth visual image sequences, in particular as applied to low-bandwidth communication systems for the deaf (involving perceptual studies of American Sign Language). During that time I also co-wrote the HIPS image processing software. In 1984 I joined the faculty at NYU, and have continued to work on problems in visual perception, concentrating on perception of depth and texture. In 1992-3, I spent a sabbatical year as a National Research Council Senior Research Associate at NASA Ames Research Center. In the summer of 1998, I visited the Institut d'Ingénierie de la Vision, Université Jean Monnet de Saint-Étienne, collaborating on work on texture appearance. In 1999-2002, I spent a sabbatical year and much of the subsequent two years at the School of Optometry, University of California at Berkeley, working with Prof. Martin S. Banks on various projects in depth perception and stereopsis; I visited again in 2015-2016.


Selected Publications

Click here for a full list of publications

Click here for a full CV

Norton, E. H., Fleming, S. M., Daw, N. D. & Landy, M. S. (2017). Suboptimal criterion learning in static and dynamic environments. PLoS Computational Biology, 13(1):e1005304. doi:10.1371/journal.pcbi.1005304

Sun, P. & Landy, M. S. (2016). A two-stage process model of sensory discrimination: An alternative to drift diffusion. Journal of Neuroscience, 36, 11259-11274.

Westrick, Z. M., Heeger, D. J. & Landy, M. S. (2016). Pattern adaptation and normalization reweighting. Journal of Neuroscience, 36, 9805-9816.

Hudson, T. E. & Landy, M. S. (2016). Sinewave-perturbed errors reveal multiple coordinate systems for sensory-motor adaptation. Vision Research, 119, 82-98.

Ackermann, J. F. & Landy, M. S. (2015). Suboptimal decision criteria are predicted by subjectively weighted probabilities and rewards. Attention, Perception & Psychophysics, 77, 638-658.

Saarela, T. & Landy, M. S. (2015). Integration of feature dimensions but failure of attentional selection in object recognition. Current Biology, 25, 920-927.

Westrick, Z. M., Henry, C. A. & Landy, M. S. (2013). Inconsistent channel bandwidth estimates suggest winner-take-all nonlinearity in second-order vision. Vision Research, 81, 58-68.

Landy, M. S., Trommershäuser, J. & Daw, N. D. (2012). Dynamic estimation of task-relevant variance in movement under risk. Journal of Neuroscience, 32, 12702-12711.

Wolpert, D. M. & Landy, M. S. (2012). Motor control is decision-making. Current Opinion in Neurobiology, 22, 996-1003.

Girshick, A. R., Landy, M. S. & Simoncelli, E. P. (2011). Cardinal rules: Visual orientation perception reflects knowledge of environmental statistics. Nature Neuroscience, 14, 926-932.

Oruç, I. & Landy, M. S. (2009). Scale dependence and channel switching in letter identification. Journal of Vision, 9(9):4, 1-19.

Trommershäuser, J., Maloney, L. T. & Landy, M. S. (2009). The expected utility of movement. In Glimcher, P. W., Camerer, C. F., Fehr, E. & Poldrack, R. A. (Eds.), Neuroeconomics: Decision Making and the Brain (pp. 95-111). New York: Academic Press.

Ho, Y.-X., Landy, M. S. & Maloney, L. T. (2008). Conjoint measurement of gloss and surface texture. Psychological Science, 19, 196-204.

Trommershäuser, J., Maloney, L. T. & Landy, M. S. (2008). Decision making, movement planning, and statistical decision theory. Trends in Cognitive Sciences, 12, 291-297.

Landy, M. S., Goutcher, R., Trommershäuser, J. & Mamassian, P. (2007). Visual estimation under risk. Journal of Vision, 7(6):4, 1-15.

Ho, Y.-X., Landy, M. S. & Maloney, L. T. (2006). How direction of illumination affects visually perceived surface roughness. Journal of Vision, 6, 634-648.

Larsson, J., Landy, M. S. & Heeger, D. J. (2006). Orientation-selective adaptation to first- and second-order patterns in human visual cortex. Journal of Neurophysiology, 95, 862-881.

Oruç, I., Landy, M. S. & Pelli, D. G. (2006). Noise masking reveals channels for second-order letters. Vision Research, 46, 1493-1506.

Trommershäuser, J., Gepshtein, S., Maloney, L. T., Landy, M. S. & Banks, M. S. (2005). Optimal compensation for changes in task-relevant movement variability. Journal of Neuroscience, 25, 7169-7178.

Banks, M. S., Gepshtein, S. & Landy, M. S. (2004). Why is stereoresolution so low? Journal of Neuroscience, 24, 2077-2089.

Landy, M. S. & Graham, N. (2004). Visual perception of texture. In Chalupa, L. M. & Werner, J. S. (Eds.), The Visual Neurosciences (pp. 1106-1118). Cambridge, MA: MIT Press.

Oruç, I., Maloney, L. T. & Landy, M. S. (2003). Weighted linear cue combination with possibly correlated error. Vision Research, 43, 2451-2468.

Trommershäuser, J., Maloney, L. T. & Landy, M. S. (2003). Statistical decision theory and the selection of rapid, goal-directed movements. Journal of the Optical Society of America A, 20, 1419-1433.

Hillis, J. M., Ernst, M. O., Banks, M. S. & Landy, M. S. (2002). Combining sensory information: mandatory fusion within, but not between, senses. Science, 298, 1627-1630.

Landy, M. S. & Oruç, I. (2002). Properties of 2nd-order spatial frequency channels. Vision Research, 42, 2311-2329.

Landy, M. S. & Kojima, H. (2001). Ideal cue combination for localizing texture-defined edges. Journal of the Optical Society of America A, 18, 2307-2320.

Mamassian, P. & Landy, M. S. (2001). Interaction of visual prior constraints. Vision Research, 41, 2653-2688.

Brenner, E. & Landy, M. S. (1999). Interaction between the perceived shape of two objects. Vision Research, 39, 3834-3848.

Mamassian, P. & Landy, M. S. (1998). Observer biases in the 3D interpretation of line drawings. Vision Research, 38, 2817-2832.

Wolfson, S. S. & Landy, M. S. (1995). Discrimination of orientation-defined texture edges. Vision Research, 35, 2863-2877.

Landy, M. S., Maloney, L. T. & Pavel, M. (Eds.) (1995). Exploratory Vision: The Active Eye. New York: Springer-Verlag.

Landy, M. S., Maloney, L. T., Johnston, E. B. & Young, M. J. (1995). Measurement and modeling of depth cue combination: In defense of weak fusion. Vision Research, 35, 389-412.

Chubb, C., Econopouly, J. & Landy, M. S. (1994). Histogram contrast analysis and the visual segregation of IID textures. Journal of the Optical Society of America A, 11, 2350-2374.

Johnston, E. B., Cumming, B. G. & Landy, M. S. (1994). Integration of stereopsis and motion shape cues. Vision Research, 34, 2259-2275.

Young, M. J., Landy, M. S. & Maloney, L. T. (1993). A perturbation analysis of depth perception from combinations of texture and motion cues. Vision Research, 33, 2685-2696.

Landy, M. S., Dosher, B. A., Sperling, G. & Perkins, M. E. (1991). The kinetic depth effect and optic flow II. Fourier and non-Fourier motion. Vision Research, 31, 859-876.

Landy, M. S. & Bergen, J. R. (1991). Texture segregation and orientation gradient. Vision Research, 31, 679-691.

Landy, M. S. & Movshon, J. A. (Eds.) (1991). Computational Models of Visual Processing. Cambridge, MA: MIT Press.

Landy, M. S., Cohen, Y. & Sperling, G. (1984). HIPS: Image processing under UNIX. Software and applications. Behavior Research Methods, Instrumentation, and Computers, 16, 199-216.



Michael S. Landy
Professor of Psychology and Neural Science

Department of Psychology
New York University
6 Washington Place, Room 961
New York, NY 10003
fax: (212)-995-4349
