Representational Dynamics of Facial Viewpoint Encoding

Standard

Representational Dynamics of Facial Viewpoint Encoding. / Kietzmann, Tim C; Gert, Anna L; Tong, Frank; König, Peter.

In: J COGNITIVE NEUROSCI, Vol. 29, No. 4, 04.2017, p. 637-651.

Research output: SCORING: Contribution to journal › SCORING: Journal article › Research › peer-review

Harvard

Kietzmann, TC, Gert, AL, Tong, F & König, P 2017, 'Representational Dynamics of Facial Viewpoint Encoding', J COGNITIVE NEUROSCI, vol. 29, no. 4, pp. 637-651. https://doi.org/10.1162/jocn_a_01070

APA

Kietzmann, T. C., Gert, A. L., Tong, F., & König, P. (2017). Representational Dynamics of Facial Viewpoint Encoding. J COGNITIVE NEUROSCI, 29(4), 637-651. https://doi.org/10.1162/jocn_a_01070

Vancouver

Kietzmann TC, Gert AL, Tong F, König P. Representational Dynamics of Facial Viewpoint Encoding. J COGNITIVE NEUROSCI. 2017 Apr;29(4):637-651. https://doi.org/10.1162/jocn_a_01070
Bibtex

@article{8d3595ed90cf4c8b846b9b95ad97817d,
title = "Representational Dynamics of Facial Viewpoint Encoding",
abstract = "Faces provide a wealth of information, including the identity of the seen person and social cues, such as the direction of gaze. Crucially, different aspects of face processing require distinct forms of information encoding. Another person's attentional focus can be derived based on a view-dependent code. In contrast, identification benefits from invariance across all viewpoints. Different cortical areas have been suggested to subserve these distinct functions. However, little is known about the temporal aspects of differential viewpoint encoding in the human brain. Here, we combine EEG with multivariate data analyses to resolve the dynamics of face processing with high temporal resolution. This revealed a distinct sequence of viewpoint encoding. Head orientations were encoded first, starting after around 60 msec of processing. Shortly afterward, peaking around 115 msec after stimulus onset, a different encoding scheme emerged. At this latency, mirror-symmetric viewing angles elicited highly similar cortical responses. Finally, about 280 msec after visual onset, EEG response patterns demonstrated a considerable degree of viewpoint invariance across all viewpoints tested, with the noteworthy exception of the front-facing view. Taken together, our results indicate that the processing of facial viewpoints follows a temporal sequence of encoding schemes, potentially mirroring different levels of computational complexity.",
author = "Kietzmann, {Tim C} and Gert, {Anna L} and Frank Tong and Peter K{\"o}nig",
year = "2017",
month = apr,
doi = "10.1162/jocn_a_01070",
language = "English",
volume = "29",
pages = "637--651",
journal = "J COGNITIVE NEUROSCI",
issn = "0898-929X",
publisher = "MIT Press",
number = "4",
}

RIS

TY - JOUR

T1 - Representational Dynamics of Facial Viewpoint Encoding

AU - Kietzmann, Tim C

AU - Gert, Anna L

AU - Tong, Frank

AU - König, Peter

PY - 2017/4

Y1 - 2017/4

N2 - Faces provide a wealth of information, including the identity of the seen person and social cues, such as the direction of gaze. Crucially, different aspects of face processing require distinct forms of information encoding. Another person's attentional focus can be derived based on a view-dependent code. In contrast, identification benefits from invariance across all viewpoints. Different cortical areas have been suggested to subserve these distinct functions. However, little is known about the temporal aspects of differential viewpoint encoding in the human brain. Here, we combine EEG with multivariate data analyses to resolve the dynamics of face processing with high temporal resolution. This revealed a distinct sequence of viewpoint encoding. Head orientations were encoded first, starting after around 60 msec of processing. Shortly afterward, peaking around 115 msec after stimulus onset, a different encoding scheme emerged. At this latency, mirror-symmetric viewing angles elicited highly similar cortical responses. Finally, about 280 msec after visual onset, EEG response patterns demonstrated a considerable degree of viewpoint invariance across all viewpoints tested, with the noteworthy exception of the front-facing view. Taken together, our results indicate that the processing of facial viewpoints follows a temporal sequence of encoding schemes, potentially mirroring different levels of computational complexity.

AB - Faces provide a wealth of information, including the identity of the seen person and social cues, such as the direction of gaze. Crucially, different aspects of face processing require distinct forms of information encoding. Another person's attentional focus can be derived based on a view-dependent code. In contrast, identification benefits from invariance across all viewpoints. Different cortical areas have been suggested to subserve these distinct functions. However, little is known about the temporal aspects of differential viewpoint encoding in the human brain. Here, we combine EEG with multivariate data analyses to resolve the dynamics of face processing with high temporal resolution. This revealed a distinct sequence of viewpoint encoding. Head orientations were encoded first, starting after around 60 msec of processing. Shortly afterward, peaking around 115 msec after stimulus onset, a different encoding scheme emerged. At this latency, mirror-symmetric viewing angles elicited highly similar cortical responses. Finally, about 280 msec after visual onset, EEG response patterns demonstrated a considerable degree of viewpoint invariance across all viewpoints tested, with the noteworthy exception of the front-facing view. Taken together, our results indicate that the processing of facial viewpoints follows a temporal sequence of encoding schemes, potentially mirroring different levels of computational complexity.

U2 - 10.1162/jocn_a_01070

DO - 10.1162/jocn_a_01070

M3 - SCORING: Journal article

C2 - 27791433

VL - 29

SP - 637

EP - 651

JO - J COGNITIVE NEUROSCI

JF - J COGNITIVE NEUROSCI

SN - 0898-929X

IS - 4

ER -