A Spatially-Coded Visual Brain-Computer Interface for Flexible Visual Spatial Information Decoding

Standard

A Spatially-Coded Visual Brain-Computer Interface for Flexible Visual Spatial Information Decoding. / Chen, Jingjing; Wang, Yijun; Maye, Alexander; Hong, Bo; Gao, Xiaorong; Engel, Andreas K; Zhang, Dan.

In: IEEE T NEUR SYS REH, Vol. 29, 2021, pp. 926-933.

Publications: SCORING: Contribution to journal/newspaper › SCORING: Journal article › Research › Peer-reviewed

Bibtex

@article{38d686ce50404fa8885072c33d25842d,
title = "A Spatially-Coded Visual Brain-Computer Interface for Flexible Visual Spatial Information Decoding",
abstract = "Conventional visual BCIs, in which control channels are tagged with stimulation patterns to elicit distinguishable brain patterns, has made impressive progress in terms of the information transfer rates (ITRs). However, less development has been seen with respect to user experience and complexity of the technical setup. The requirement to tag each of targets by a unique stimulus substantially limits the flexibility of conventional visual BCI systems. A method for decoding the targets in the environment flexibly was therefore proposed in the present study. A BCI speller with thirteen symbols drawn on paper was developed. The symbols were interspersed with four flickers with distinct frequencies, but the user did not have to gaze at flickers. Rather, subjects could spell a sequence by looking at the symbols on the paper. In a cue-guided spelling task, the average offline and online accuracies reached 89.3± 7.3% and 90.3± 6.9% for 13 subjects, corresponding to ITRs of 43.0± 7.4 bit/min and 43.8± 6.8 bit/min. In an additional free-spelling task for seven out of thirteen subjects, an accuracy of 92.3± 3.1% and an ITR of 45.6± 3.3 bit/min were achieved. Analysis of a simulated online system showed the possibility to reach an average ITR of 105.8 bit/min by reducing the epoch duration from 4 to 1 second. Reliable BCI control is possible by gazing at targets in the environment instead of dedicated stimuli which encode control channels. The proposed method can drastically reduce the technical effort for visual BCIs and thereby advance their applications outside the laboratory.",
keywords = "Brain, Brain-Computer Interfaces, Electroencephalography, Evoked Potentials, Visual, Humans, Online Systems, Photic Stimulation",
author = "Jingjing Chen and Yijun Wang and Alexander Maye and Bo Hong and Xiaorong Gao and Engel, {Andreas K} and Dan Zhang",
year = "2021",
doi = "10.1109/TNSRE.2021.3080045",
language = "English",
volume = "29",
pages = "926--933",
journal = "IEEE T NEUR SYS REH",
issn = "1534-4320",
publisher = "Institute of Electrical and Electronics Engineers Inc.",

}

RIS

TY - JOUR

T1 - A Spatially-Coded Visual Brain-Computer Interface for Flexible Visual Spatial Information Decoding

AU - Chen, Jingjing

AU - Wang, Yijun

AU - Maye, Alexander

AU - Hong, Bo

AU - Gao, Xiaorong

AU - Engel, Andreas K

AU - Zhang, Dan

PY - 2021

Y1 - 2021

N2 - Conventional visual BCIs, in which control channels are tagged with stimulation patterns to elicit distinguishable brain patterns, have made impressive progress in terms of information transfer rates (ITRs). However, less development has been seen with respect to user experience and the complexity of the technical setup. The requirement to tag each target with a unique stimulus substantially limits the flexibility of conventional visual BCI systems. A method for flexibly decoding targets in the environment was therefore proposed in the present study. A BCI speller with thirteen symbols drawn on paper was developed. The symbols were interspersed with four flickers of distinct frequencies, but the user did not have to gaze at the flickers. Rather, subjects could spell a sequence by looking at the symbols on the paper. In a cue-guided spelling task, the average offline and online accuracies reached 89.3 ± 7.3% and 90.3 ± 6.9% for 13 subjects, corresponding to ITRs of 43.0 ± 7.4 bit/min and 43.8 ± 6.8 bit/min. In an additional free-spelling task for seven of the thirteen subjects, an accuracy of 92.3 ± 3.1% and an ITR of 45.6 ± 3.3 bit/min were achieved. Analysis of a simulated online system showed the possibility of reaching an average ITR of 105.8 bit/min by reducing the epoch duration from 4 seconds to 1 second. Reliable BCI control is thus possible by gazing at targets in the environment instead of at dedicated stimuli that encode control channels. The proposed method can drastically reduce the technical effort for visual BCIs and thereby advance their applications outside the laboratory.

AB - Conventional visual BCIs, in which control channels are tagged with stimulation patterns to elicit distinguishable brain patterns, have made impressive progress in terms of information transfer rates (ITRs). However, less development has been seen with respect to user experience and the complexity of the technical setup. The requirement to tag each target with a unique stimulus substantially limits the flexibility of conventional visual BCI systems. A method for flexibly decoding targets in the environment was therefore proposed in the present study. A BCI speller with thirteen symbols drawn on paper was developed. The symbols were interspersed with four flickers of distinct frequencies, but the user did not have to gaze at the flickers. Rather, subjects could spell a sequence by looking at the symbols on the paper. In a cue-guided spelling task, the average offline and online accuracies reached 89.3 ± 7.3% and 90.3 ± 6.9% for 13 subjects, corresponding to ITRs of 43.0 ± 7.4 bit/min and 43.8 ± 6.8 bit/min. In an additional free-spelling task for seven of the thirteen subjects, an accuracy of 92.3 ± 3.1% and an ITR of 45.6 ± 3.3 bit/min were achieved. Analysis of a simulated online system showed the possibility of reaching an average ITR of 105.8 bit/min by reducing the epoch duration from 4 seconds to 1 second. Reliable BCI control is thus possible by gazing at targets in the environment instead of at dedicated stimuli that encode control channels. The proposed method can drastically reduce the technical effort for visual BCIs and thereby advance their applications outside the laboratory.

KW - Brain

KW - Brain-Computer Interfaces

KW - Electroencephalography

KW - Evoked Potentials, Visual

KW - Humans

KW - Online Systems

KW - Photic Stimulation

U2 - 10.1109/TNSRE.2021.3080045

DO - 10.1109/TNSRE.2021.3080045

M3 - SCORING: Journal article

C2 - 33983885

VL - 29

SP - 926

EP - 933

JO - IEEE T NEUR SYS REH

JF - IEEE T NEUR SYS REH

SN - 1534-4320

ER -
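
Note on the reported ITRs: the figures in the abstract are consistent with the standard Wolpaw formula, B = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)) bits per selection, multiplied by the number of selections per minute. The Python sketch below is illustrative only; it assumes 13 selectable symbols and a 4-second selection epoch as stated in the abstract, and the small deviation from the published 43.0 bit/min is expected because the paper averages per-subject ITRs and the exact per-selection timing is not given here.

import math

def wolpaw_itr(accuracy, n_targets, selection_time_s):
    # Wolpaw information transfer rate in bit/min.
    # accuracy: probability of a correct selection (0 < accuracy <= 1)
    # n_targets: number of selectable targets (13 symbols in this speller)
    # selection_time_s: time per selection in seconds (epoch duration)
    p, n = accuracy, n_targets
    bits = math.log2(n)
    if p < 1.0:
        bits += p * math.log2(p) + (1.0 - p) * math.log2((1.0 - p) / (n - 1))
    return bits * (60.0 / selection_time_s)

# Offline cue-guided accuracy from the abstract: 89.3%, 13 symbols, 4 s epochs.
print(round(wolpaw_itr(0.893, 13, 4.0), 1))  # ~42.4 bit/min, close to the reported 43.0 bit/min

# Hypothetical ~70% single-epoch accuracy, chosen here only to illustrate how
# 1 s epochs could yield roughly the 105.8 bit/min reported for the simulated system.
print(round(wolpaw_itr(0.70, 13, 1.0), 1))   # ~104.6 bit/min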