Student Theses and Dissertations


Zetian Yang

Date of Award


Document Type


Degree Name

Doctor of Philosophy (PhD)

RU Laboratory

Freiwald Laboratory


Faces contain a plethora of information crucial for social interactions. Facial information can be transmitted either through facial shape or through facial motion. The last two decades have established that a network of face-selective areas in the temporal lobe of macaque monkeys supports the visual processing of faces. Most of these studies focused on the processing of static facial shape. They found that each area within the face network contains a large fraction of face-selective cells, and that each area encodes facial identity and head orientation differently. A recent brain-imaging study discovered a new face area outside this classic network, the medio-dorsal face area (MD). This finding offers the opportunity to determine whether the coding principles revealed inside the core network generalize to face areas outside it. We investigated the encoding in MD of static faces and objects, facial identity, and head orientation, dimensions that had previously been studied in multiple areas of the core face-processing network, as well as facial expressions and gaze.

We found that MD populations form a face-selective cluster with a degree of selectivity comparable to that of areas in the core face-processing network. MD encodes facial identity, expression, and head orientation robustly and independently of each other. Furthermore, MD also encodes the direction of gaze in addition to head orientation. Thus MD contains a heterogeneous population of cells that establishes a multi-dimensional code for static facial shape. Faces can also move in a highly nonlinear fashion, displaying both complex motion patterns and changes in facial shape. Previous brain-imaging studies suggested that MD is a face-motion area, but direct evidence from electrophysiology was still missing. We recorded single-unit activity from MD and tested whether MD cells are selective for face motion.
We found MD cells that respond only to the simultaneous presence of facial shape and motion, and thus truly integrate the two. We found no evidence for two separate MD populations, one selective only for facial shape and the other only for general motion. Interestingly, MD cells represent face motion in a higher-dimensional space than the optic flow of the stimuli and are highly sensitive to physically subtle face motion, such as shifts of gaze direction. Further, MD cells encode face motion using multiple reference frames, separating the motion of the entire head from that of facial parts. Thus, MD is a bona fide face-motion-selective area and might create representations of face motion using canonical neural computations. Finally, we found that MD responds with a much shorter latency than any other face area, providing facial information as early as 30 ms. We then performed connectivity experiments to determine where MD sends this fast information. We found that MD is connected to a variety of high-level areas in prefrontal and other cortices, showing a connectivity pattern strikingly different from those of the other face areas. In sum, MD packs multiple computations into a single area and enables rapid multidimensional face analysis. It also sends this information to downstream areas, possibly for high-level cognitive and social processing. This makes MD an ideal area to support real-life facial interactions.


A Thesis Presented to the Faculty of The Rockefeller University in Partial Fulfillment of the Requirements for the degree of Doctor of Philosophy

Available for download on Thursday, February 22, 2024
