
Broadly, researchers in the lab investigate the cognitive and neural processes underlying split-second social perception. We take an integrative, multi-method approach, combining behavioral paradigms (e.g., mouse-tracking), brain imaging (fMRI), electrophysiology (EEG/ERP), and computational modeling, which allows us to address our research questions at multiple levels of analysis. Although our work is continually moving in new directions, the following are several topics we pursue.

For a recent review of some of our research, see Freeman and Johnson (2016) in Trends in Cognitive Sciences.

 

Social Categorization: Carving Up the Social World

Decades of research have documented the consequences of placing another person into a social category. Once a person is categorized by gender, race, or age, for example, associated stereotypes and evaluative biases often come into play, in spite of one's better intentions. But how do we make these categorizations in the first place? Our work suggests that social categorization is a highly dynamic and malleable process; it is not simply a discrete, straightforward 'read-out' of facial features. None of us is the perfect prototype of a single category, and each of our faces inhabits graded points along any number of social continua (e.g., from 100% masculine to 100% feminine). This is especially clear in the case of racially ambiguous faces, such as those of biracial or multiracial individuals, but it is hardly limited to those circumstances. How do we take the natural diversity and inherently continuous spectra out in the social world and form single categorizations, and how do we resolve temporary ambiguities? How does the brain represent social categories, and how is their perception shaped by facial cues, social-conceptual knowledge (e.g., stereotypes), or an individual's experiences with other social groups? How do the processes underlying social categorization impact the ways we evaluate and interact with other people? To answer these questions, we use neural measures as well as mouse-tracking, which uncovers the dynamic processes underlying categorization by recording participants' real-time hand movements en route to responses on the screen. These hand movements can reveal when during face perception associated stereotypes become activated (e.g., male → aggressive), when certain facial or contextual cues shape the process of perception, and how top-down social factors (e.g., stereotypes) impact perceptions.
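To give a concrete flavor of the kind of measure mouse-tracking yields, the sketch below computes a trajectory's maximum deviation from the direct start-to-response path, a standard index of attraction toward the unselected category. This is a generic illustration with made-up coordinates, not the lab's MouseTracker software.

```python
import numpy as np

def max_deviation(xs, ys):
    """Maximum perpendicular deviation of a mouse trajectory from the
    straight line connecting its start and end points. Larger values
    indicate a stronger pull toward the unselected response option."""
    p0 = np.array([xs[0], ys[0]])
    p1 = np.array([xs[-1], ys[-1]])
    line = p1 - p0
    norm = np.linalg.norm(line)
    pts = np.column_stack([xs, ys]) - p0
    # Perpendicular distance of each sample from the direct path
    # via the 2D cross product
    cross = line[0] * pts[:, 1] - line[1] * pts[:, 0]
    return np.max(np.abs(cross)) / norm

# A perfectly straight movement shows zero deviation; a trajectory
# that initially arcs toward the opposite response does not
straight = max_deviation([0, 0.5, 1.0], [0, 0.75, 1.5])
curved = max_deviation([0, 0.9, 1.0], [0, 0.2, 1.5])
```

Comparing such deviation scores across conditions (e.g., stereotype-congruent vs. incongruent faces) is one way partial, tentative shifts toward a competing category can be quantified.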
At the neural level, we are especially interested in how the interplay of lower-level perceptual brain regions involved in face processing and higher-order prefrontal regions, which have access to social category knowledge and stereotypes, gives rise to categorizations. We are also interested in the multi-level neural representations that may underlie these categorizations. For example, when encountering a racially ambiguous individual, do some brain regions possess a monoracial representation (e.g., White or Black) while others possess a multiracial representation (e.g., mixed-race), and how does this change with exposure to racial ambiguity? How does the brain structure social categories? Is the brain's coding of a social category inherently linked to the social-conceptual knowledge (i.e., stereotypes) or perceptual cues (i.e., facial features) associated with it? How do these social categorization processes shape downstream social interaction? We have recently begun using computational and neural decoding approaches, such as multi-voxel pattern analysis (MVPA), to address these questions (e.g., Stolier & Freeman, 2016). Together, researchers in the lab use multiple behavioral and neural methods to understand how we carve up our social worlds. We believe that a rigorous approach assessing the mechanisms, processes, and representations underlying social categorization will yield important insights into reducing its negative downstream consequences (e.g., gender or racial bias and discrimination) and into improving intergroup interactions.
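As a hedged illustration of the decoding logic behind MVPA, the sketch below runs a simple correlation-based nearest-centroid classifier on simulated "voxel" patterns; actual analyses use real fMRI data and full cross-validation pipelines, and all quantities here (trial counts, signal strengths) are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated voxel patterns for two conditions: a condition-specific
# signal added to trial-by-trial noise
n_trials, n_voxels = 20, 50
signal_a = rng.normal(size=n_voxels)
signal_b = rng.normal(size=n_voxels)
cond_a = rng.normal(size=(n_trials, n_voxels)) + signal_a
cond_b = rng.normal(size=(n_trials, n_voxels)) + signal_b

def decode(train_a, train_b, test_patterns):
    """Nearest-centroid decoding: correlate each test pattern with the
    mean training pattern of each condition and pick the higher one."""
    centroids = np.vstack([train_a.mean(0), train_b.mean(0)])
    preds = []
    for p in test_patterns:
        r = [np.corrcoef(p, c)[0, 1] for c in centroids]
        preds.append(int(np.argmax(r)))  # 0 = condition A, 1 = condition B
    return np.array(preds)

# Train on the first half of trials, test on the held-out second half
half = n_trials // 2
preds_a = decode(cond_a[:half], cond_b[:half], cond_a[half:])
preds_b = decode(cond_a[:half], cond_b[:half], cond_b[half:])
accuracy = np.concatenate([preds_a == 0, preds_b == 1]).mean()
```

Above-chance decoding accuracy in a region is taken as evidence that its multi-voxel activity patterns carry information distinguishing the conditions, which is the inferential step that lets such analyses probe how category representations are structured.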


Visual Bias: Stereotypes and Face Perception

It's clear that our stereotypes about groups or expectations about another person can influence how we evaluate or interact with them, often in unintended ways. But can stereotypes or expectations literally change the way we "see" another person? Researchers in the lab investigate how, and to what extent, such high-level social processes shape the visual perception of another's face. We often use fMRI as well as mouse-tracking paradigms to assess the perceptual impact of stereotypes, expectations, or context. Mouse-tracking can be especially useful because it is highly sensitive to partial, tentative shifts in perception; in most cases, top-down factors are not strong enough to induce wholesale perceptual changes. For example, low-status attire stereotypically associated with Black individuals in the U.S. may lead a face to be temporarily interpreted as Black for a few hundred milliseconds before it is ultimately categorized as White (see figure; Freeman et al., 2011). Similarly, expectations due to the context in which a person is encountered (e.g., a Chinese context) can lead a face to be temporarily interpreted one way (e.g., Asian) before settling on another (e.g., White) (Freeman et al., 2013). At the neural level, we investigate the brain mechanisms that drive this flexible impact of social processes (stereotypes, expectations, or social context) on the visual perception of faces. One region we believe plays a particularly important role is the orbitofrontal cortex (OFC) (Stolier & Freeman, 2016; Freeman et al., 2015; Freeman et al., 2010). We are also interested in the roles of automaticity and control in such visual biases, as well as the downstream consequences that result from them. If bias exists at a visual level and changes the way we visually "see" another person, it could produce self-fulfilling prophecies and exacerbate existing behavioral biases.
More broadly, these questions inform our understanding of the interplay between social cognition and visual perception.

 

Faces and Trait Attribution

We're often told not to judge a book by its cover. Nevertheless, we routinely use facial cues to infer an individual's personality traits, and this can occur quite rapidly. In one series of studies, the trustworthiness of subliminally presented faces sensitively modulated activation in the amygdala despite the faces being subjectively invisible (Freeman et al., 2014). However, although spontaneous, the amygdala's role in trait inference is highly sensitive to context and prior social knowledge (e.g., Freeman et al., 2010). Prior research has advanced several prominent models of trait attribution, centering on two primary dimensions: warmth/trustworthiness/intention and competence/dominance/ability. Researchers in the lab are exploring several questions related to trait attribution, including how the structure of fundamental trait dimensions intersects with various social group memberships (e.g., gender, race, age) and changes across social contexts. For example, the facial width-to-height ratio is a cue strongly related to perceptions of dominance in younger adults, but in older adults it takes on new meaning and bleeds over into perceptions of wisdom (Hehman et al., 2014). We are also interested in how the basic visual perception of facial cues related to a given trait is malleable to top-down influences of social cognition, such that stereotypes, expectations, group processes, or motivations could change the way we visually construe a face's personality. Finally, we are exploring issues of consistency and accuracy in face-based trait attributions.

Real-World Consequences of Split-Second Perceptions
 

We are exploring the link between real-time categorization dynamics and real-world social dynamics. Ultimately, the aim of this work is to bridge multiple time scales of social behavior, from millisecond-level dynamics to important downstream social outcomes. We are investigating how categorization processes occurring in the first few hundred milliseconds of face perception may be predictive of real-world behavior. Our recent implementation of mouse-tracking on the Internet affords us a temporally sensitive methodology that can be linked with data from across the country and world. In several studies, we are examining how perceiver characteristics that vary across geographic regions may interact with perceptual cues during categorization in ways that predict high-level behaviors endemic to those regions. For example, in participants from across the country or world, we are examining how gender or race categorization dynamics during early face processing predict the election of certain leaders or the presence of social inequalities in states and nations. We recently showed that the extent to which a female politician's face is construed as masculine only 380 milliseconds after exposure was predictive of her electoral failure, particularly in conservative U.S. states (see video below; Hehman et al., 2014). In some cases, we also examine the role of neural responses assessed via fMRI in predicting downstream behavior. Relatedly, studies in the lab are relating real-time categorization dynamics to real-world social interactions using automated quantitative methods that track meaningful nonverbal information, facial cues, and bodily synchrony. Together, researchers in the lab are examining the relationships between millisecond-level categorization processes, social interaction, and real-world consequences.


Intersecting Social Dimensions
 

From another's face, multiple possible perceptions are available: sex, race, age, emotion, sexual orientation, and personality traits, among others. We are investigating the conditions under which these dimensions are simultaneously perceived, as well as how they may be fundamentally intertwined and meaningfully interact. Social dimensions may interact due to top-down perceiver impacts, where existing knowledge structures, the stereotypes a perceiver holds, motivations, and other social factors throw different dimensions into interaction. For instance, shared stereotypes between men and Blacks (e.g., aggressive) and between women and Asians (e.g., communal) lead Black male and Asian female faces to be categorized more quickly (Johnson et al., 2012). Stereotypes linking Blacks to anger and hostility lead even happy Black faces to be initially perceived as angry, a process that recruits several prefrontal mechanisms, particularly to correct this initial bias (Hehman et al., 2015). These shared stereotypes have a dynamic impact on the perception of ostensibly unrelated social categories, a process that appears to involve interplay between the orbitofrontal cortex and fusiform cortex (Stolier & Freeman, 2016). Another manner of social category intersection is through bottom-up target impacts, where the perceptual cues supporting different dimensions inherently overlap. One example is that males are perceived more efficiently when angry but females more efficiently when happy, partly because what makes a face more masculine also makes it look more angry, and what makes a face more feminine also makes it look more happy. We have recently been using multivariate fMRI approaches in tandem with mouse-tracking and other behavioral measures to more directly examine the extent to which these various social category representations intersect and overlap, and how they are shaped by one's social experiences.

Modeling Person Perception
 

We use computational neural network modeling to examine the person perception process more rigorously. For example, our dynamic-interactive model aims to explain how the bottom-up extraction of facial, vocal, and bodily cues drives person perception while being simultaneously constrained by top-down social cognitive processes (Freeman & Ambady, 2011). In this process, basic perceptions emerge from the continuous interaction between lower-level sensory processing and higher-order social cognition (e.g., stereotypes, goals). The model allows us to generate and test new hypotheses about the nature of person perception. Using model variants, we investigate the interrelations among cues, categories, stereotypes, and higher-order states, and how the dynamics through which they become activated determine social judgments. The modeling work leads to unique predictions about a variety of person perception phenomena, which we then test empirically with behavioral and/or neural data. In some cases, we are implementing further models to test specific hypotheses about the nature of social categorization dynamics, the structure and function of category and stereotype representations, and the integration of bottom-up and top-down information in social judgment. We are also interested in mapping such models onto neuroimaging data (especially multivariate fMRI data) and identifying the neural mechanisms underlying various model components and layers of processing.
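To make the flavor of such models concrete, here is a toy sketch in the spirit of interactive-activation dynamics. It is an illustration of the general idea only, not the published dynamic-interactive model, and all parameters (inhibition strength, update rate, inputs) are arbitrary assumptions.

```python
import numpy as np

def settle(bottom_up, top_down, steps=200, rate=0.1):
    """Toy interactive-activation dynamics for two competing category
    nodes. Each node receives continuous bottom-up cue evidence and
    top-down expectation input, inhibits its competitor, and its
    activation evolves gradually rather than switching discretely."""
    act = np.zeros(2)
    history = [act.copy()]
    for _ in range(steps):
        net = bottom_up + top_down - 0.5 * act[::-1]  # mutual inhibition
        act += rate * (np.tanh(net) - act)            # gradual settling
        history.append(act.copy())
    return np.array(history)

# Ambiguous facial cues (equal bottom-up evidence for both categories)
# are disambiguated by a weak top-down expectation favoring category 0
traj = settle(bottom_up=np.array([0.5, 0.5]),
              top_down=np.array([0.3, 0.0]))
```

Because activation accrues gradually, the losing category node remains partially active throughout settling; this kind of graded co-activation is what mouse trajectories are thought to index behaviorally.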

Cartoon illustration at top-right by Danielle Laurenti.