Modeling implicit learning in a cross-modal audio-visual serial reaction time task

Standard

Modeling implicit learning in a cross-modal audio-visual serial reaction time task. / Taesler, Philipp; Jablonowski, Julia; Fu, Qiufang; Rose, Michael.

In: COGN SYST RES, Vol. 54, 01.05.2019, p. 154-164.

Research output: SCORING: Contribution to journal › SCORING: Journal article › Research › peer-review

Bibtex

@article{2bb32d7ec1e046bca1b8e215c9f0623f,
title = "Modeling implicit learning in a cross-modal audio-visual serial reaction time task",
abstract = "This study examined implicit learning in a cross-modal condition in which visual and auditory stimuli were presented in an alternating fashion. Each cross-modal transition occurred with a probability of 0.85, enabling participants to gain a reaction time benefit by learning the cross-modal predictive information between colors and tones. Motor responses were randomly remapped to ensure that pure perceptual learning took place. The implicit learning effect was extracted by fitting five different models to the reaction time data, which were highly variable due to motor variability. To examine individual learning rates for stimulus types of different discriminability and modality, the models were fitted per stimulus type and individually for each participant. Model selection identified the model that included motor variability, surprise effects for deviants, and a serial position for effect onset as the most explanatory (Akaike weight 0.87). Further, there was a significant global cross-modal implicit learning effect for predictable versus deviant transitions (40 ms reaction time difference, p ",
keywords = "Implicit learning, Cross-modal, Modeling, Serial reaction time task, Audio-visual",
author = "Philipp Taesler and Julia Jablonowski and Qiufang Fu and Michael Rose",
year = "2019",
month = may,
day = "1",
doi = "10.1016/j.cogsys.2018.10.002",
language = "English",
volume = "54",
pages = "154--164",
journal = "COGN SYST RES",
issn = "2214-4366",
publisher = "Elsevier",

}

RIS

TY - JOUR

T1 - Modeling implicit learning in a cross-modal audio-visual serial reaction time task

AU - Taesler, Philipp

AU - Jablonowski, Julia

AU - Fu, Qiufang

AU - Rose, Michael

PY - 2019/5/1

Y1 - 2019/5/1

N2 - This study examined implicit learning in a cross-modal condition in which visual and auditory stimuli were presented in an alternating fashion. Each cross-modal transition occurred with a probability of 0.85, enabling participants to gain a reaction time benefit by learning the cross-modal predictive information between colors and tones. Motor responses were randomly remapped to ensure that pure perceptual learning took place. The implicit learning effect was extracted by fitting five different models to the reaction time data, which were highly variable due to motor variability. To examine individual learning rates for stimulus types of different discriminability and modality, the models were fitted per stimulus type and individually for each participant. Model selection identified the model that included motor variability, surprise effects for deviants, and a serial position for effect onset as the most explanatory (Akaike weight 0.87). Further, there was a significant global cross-modal implicit learning effect for predictable versus deviant transitions (40 ms reaction time difference, p

AB - This study examined implicit learning in a cross-modal condition in which visual and auditory stimuli were presented in an alternating fashion. Each cross-modal transition occurred with a probability of 0.85, enabling participants to gain a reaction time benefit by learning the cross-modal predictive information between colors and tones. Motor responses were randomly remapped to ensure that pure perceptual learning took place. The implicit learning effect was extracted by fitting five different models to the reaction time data, which were highly variable due to motor variability. To examine individual learning rates for stimulus types of different discriminability and modality, the models were fitted per stimulus type and individually for each participant. Model selection identified the model that included motor variability, surprise effects for deviants, and a serial position for effect onset as the most explanatory (Akaike weight 0.87). Further, there was a significant global cross-modal implicit learning effect for predictable versus deviant transitions (40 ms reaction time difference, p

KW - Implicit learning

KW - Cross-modal

KW - Modeling

KW - Serial reaction time task

KW - Audio-visual

U2 - 10.1016/j.cogsys.2018.10.002

DO - 10.1016/j.cogsys.2018.10.002

M3 - SCORING: Journal article

VL - 54

SP - 154

EP - 164

JO - COGN SYST RES

JF - COGN SYST RES

SN - 2214-4366

ER -
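
Note on the model selection reported in the abstract: the Akaike weight of 0.87 for the winning model follows from the standard conversion of per-model AIC scores into normalized weights. The sketch below is a minimal illustration of that conversion, not the authors' code; the function name and the five AIC values are hypothetical, chosen only to show the computation.

import numpy as np

def akaike_weights(aic_values):
    # Rescale so the best-fitting model has delta = 0.
    aic = np.asarray(aic_values, dtype=float)
    delta = aic - aic.min()
    # Relative likelihood of each model given the data.
    rel_likelihood = np.exp(-0.5 * delta)
    # Normalize to weights summing to 1; a weight near 0.87 for one
    # model, as reported in the abstract, indicates strong relative
    # support within the candidate set.
    return rel_likelihood / rel_likelihood.sum()

# Hypothetical AIC scores for five candidate reaction-time models:
print(akaike_weights([1012.3, 1008.1, 1019.7, 1004.2, 1015.6]))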