
Designing Brain Computer Interfaces Using Visual Evoked Potentials






Presentation Transcript


  1. Designing Brain Computer Interfaces Using Visual Evoked Potentials Deniz Erdogmus Cognitive Systems Laboratory Northeastern University

  2. References and Acknowledgments Cognitive Systems Laboratory Northeastern University • Papers and videos are available at: http://www.ece.neu.edu/~erdogmus • For communication, please send me an email at: erdogmus@ece.neu.edu • Our research on BCI has been funded by DARPA, NSF, NIH, and NLMFF. • Thanks: • NU: Umut Orhan, Shalini Purwar, Hooman Nezamfar, Tanarat Dityam, Capstone and REU students • OHSU: Catherine Huang, Kenneth Hild, Brian Roark, Barry Oken, Melanie Fried-Oken, Misha Pavel, OHSU team • Honeywell: Santosh Mathan

  3. The EEG-BCI Concept Cognitive Systems Laboratory Northeastern University

  4. Typical Brain Signals Exploited in EEG-BCI Design Thanks: Gerwin Schalk Cognitive Systems Laboratory Northeastern University • Visual or auditory evoked P300 response (ERP) • Wadsworth BCI (Wolpaw & Schalk) • Gtec and Graz-BCI (Pfurtscheller et al.) • Columbia Univ. (Sajda & Parra) • Northeastern, Honeywell (Erdogmus & Mathan) • Motor imagery (MI) or other synchronization/desynchronization activity in the motor cortex • Wadsworth BCI • Berlin BCI (Muller, Blankertz) • UMN (He) • Steady-state visual evoked potentials (SSVEP) • Graz BCI (Allison et al.) • Tsinghua (Gao et al.) • Northeastern, Honeywell (Erdogmus & Mathan)

  5. Qualitative Comparison: Training Time versus Bandwidth Table. Area under the ROC curve (AUC) for binary intent classification when the subject is focused and distracted. Cognitive Systems Laboratory Northeastern University • We performed a binary (left/right) intent communication experiment using these three signals with a naïve subject. • P3a: Left/right square flashes • MI: Left/right hand-tap imagined • SSVEP: Left/right square flickers • Experiment conducted in two modes: focused & distracted • Focused: Sit still on a chair and focus on the task as prompted • Distracted: Tap feet on the floor as if walking while simultaneously attending the task as prompted

  6. Experiment Details for Table on Previous Slide Cognitive Systems Laboratory Northeastern University • P300 task • Short white square pulses randomly every [500, 600) ms; left/right flashes desynchronized. Post-stimulus-onset 500-ms EEG from [O1, O2, POz, Oz, FC1, Cz, P1, P2, C1, C2, C3, C4, CP3, CP4] used in RDA. • MI task • Subject visually instructed to imagine left/right hand motion. Bipolar C3-C4 & CP3-CP4 used in RDA with 4-sec windowed PSD features. • VEP task • M-sequence VEP obtained using two flickering checkerboards. The m-sequence is 31-bit periodic, presented at 15 bits/sec (decision time of 2.3 seconds). Template matching classifier using signals from [O1, O2, POz, Oz].
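The template matching classifier mentioned for the VEP task can be sketched as follows: average the training epochs for each m-sequence class into a template, then assign a new epoch to the class whose template correlates best with it. This is a minimal single-channel sketch; the shapes and the correlation score are illustrative assumptions, not the exact pipeline from the slides.

```python
import numpy as np

def msequence_templates(train_epochs, labels, n_classes):
    """Average training epochs per m-sequence class to form templates.
    train_epochs: (n_trials, n_samples) single-channel EEG epochs."""
    return np.stack([train_epochs[labels == c].mean(axis=0)
                     for c in range(n_classes)])

def classify_epoch(epoch, templates):
    """Pick the template with the highest Pearson correlation to the epoch."""
    scores = [np.corrcoef(epoch, t)[0, 1] for t in templates]
    return int(np.argmax(scores))
```

With multi-channel data (e.g. the [O1, O2, POz, Oz] set above), the same idea applies per channel, with scores summed or votes taken across channels.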

  7. Image Triage BCI using P3 and RSVP Cognitive Systems Laboratory Northeastern University Present sequence of images using rapid serial visual presentation (RSVP). RSVP performed twice to prevent misses. Tag potential target images using single-trial P3 detection. Result: 6-fold speed-up compared to manual tagging.

  8. Image Triage BCI System Overview Cognitive Systems Laboratory Northeastern University

  9. RSVP and P3 Cognitive Systems Laboratory Northeastern University

  10. RSVP Cognitive Systems Laboratory Northeastern University • Targets are rare (<1%); non-target distractors are numerous • Image presentation duration between 50 and 200 ms per image

  11. Support Vector Machines (f: Gaussian kernel eigenfunctions) Cognitive Systems Laboratory Northeastern University • Our baseline classifier is a Gaussian-SVM. SVMs • Map input EEG features to a high-dimensional space via kernel eigenfunctions • Identify the optimal linear binary classification boundary that maximizes the margin • Find a small number of support vectors that are closest to the boundary, such that SV-to-boundary distances are equal to this maximal margin • Evaluate the class label for new samples by comparing them to these SVs only • We use Gaussian-SVM and Linear-SVM to detect the presence of the P3.
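A Gaussian-SVM P3 detector of this kind can be sketched with scikit-learn. The feature shape, the synthetic "P3" offset, and the use of class weighting for target rarity are illustrative assumptions, not the exact configuration used in the study.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for flattened post-stimulus EEG feature vectors.
rng = np.random.default_rng(0)
n_trials, n_feat = 200, 64
y = np.array([1] * 20 + [0] * 180)      # targets are rare in RSVP
X = rng.standard_normal((n_trials, n_feat))
X[y == 1] += 0.8                        # crude "P3" signature on target trials

# Gaussian (RBF) SVM; class_weight="balanced" compensates for target rarity.
p3_detector = make_pipeline(
    StandardScaler(),
    SVC(kernel="rbf", C=1.0, gamma="scale", class_weight="balanced"),
)
p3_detector.fit(X, y)
scores = p3_detector.decision_function(X)   # signed margin distance per trial
```

The continuous `decision_function` scores, rather than hard labels, are what an ROC/AUC analysis like the one on the next slides operates on.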

  12. Multi-session Calibration of GSVM P3 Detector Cognitive Systems Laboratory Northeastern University We performed a BCI image triage experiment using 4 naïve subjects, each for 10 sessions (5 consecutive days, morning/afternoon). Trained 9 Gaussian-SVM P3 detectors as follows: GSVMi is trained on data from sessions {1,…,i} and tested on session (i+1). The average AUC on the test data is reported here for each subject/session with estimated error bars.
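The multi-session calibration protocol above (train on sessions 1..i, test on session i+1, report AUC) can be sketched as a loop; the RBF-SVM settings are assumptions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.svm import SVC

def calibration_aucs(sessions):
    """Train GSVM_i on sessions 1..i and test on session i+1.
    sessions: list of (X, y) pairs, one per recording session."""
    aucs = []
    for i in range(1, len(sessions)):
        X_tr = np.vstack([X for X, _ in sessions[:i]])
        y_tr = np.concatenate([y for _, y in sessions[:i]])
        X_te, y_te = sessions[i]
        clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
        aucs.append(roc_auc_score(y_te, clf.decision_function(X_te)))
    return aucs
```

With 10 sessions this yields the 9 per-subject test AUCs described above.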

  13. Multi-session Incremental SVM Training Cognitive Systems Laboratory Northeastern University SVM training complexity increases superlinearly with the number of samples. Incremental SVM is based on the premise that only the support vectors of a previously examined training set need to be retained. Incremental SVM (iSVM) training proceeds as follows: train iSVMi using the support vectors of {iSVM1, …, iSVM(i-1)} and training set i.
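The iSVM procedure above can be sketched as retraining on the retained support vectors plus each new session's data. Note this is an approximation of incremental SVM training, not an exact incremental update; hyperparameters are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def incremental_svm(batches, **svm_kw):
    """Sketch of multi-session iSVM: carry forward only the support
    vectors between sessions, then retrain on SVs + the new batch."""
    kept_X, kept_y = None, None
    models = []
    for X, y in batches:
        if kept_X is not None:
            X = np.vstack([kept_X, X])
            y = np.concatenate([kept_y, y])
        clf = SVC(kernel="rbf", gamma="scale", **svm_kw).fit(X, y)
        kept_X = clf.support_vectors_        # only SVs are remembered
        kept_y = y[clf.support_]
        models.append(clf)
    return models
```

Because the retained set stays close to the number of support vectors rather than the cumulative sample count, training cost per session grows far more slowly than full retraining.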

  14. Mixed Effects Modeling (MEM) of ERP Cognitive Systems Laboratory Northeastern University Single-trial ERP/non-ERP responses to each image are variable. MEM tries to capture this variability using a simple hierarchical Bayesian approach; in particular, we used a Gaussian graphical model. In the model, yi (observations: raw EEG signals), Xi (design matrix), and Zi (random-effects design) are known; a (population/fixed effects), bi (random effects), and ei (errors) are unknown, parameterized by the mean of the fixed effects and the standard deviations of the random effects and the noise.
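The slide's equation did not survive extraction. Reconstructed from its labels (observations y_i, design matrix X_i, population effects a, random-effects design Z_i, random effects b_i, errors e_i), a standard Gaussian mixed-effects model reads as follows; the i.i.d. isotropic covariance structure is an assumption consistent with the "std of random effects / std of noise" labels:

```latex
y_i = X_i\, a + Z_i\, b_i + e_i, \qquad
b_i \sim \mathcal{N}\!\left(0, \sigma_b^2 I\right), \qquad
e_i \sim \mathcal{N}\!\left(0, \sigma_e^2 I\right)
```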

  15. Fisher Kernel Cognitive Systems Laboratory Northeastern University Fisher score as feature transform: Fisher information matrix inverse as Riemanian metric: Linear Fisher kernel: Gaussian Fisher kernel:
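The four equations on this slide were lost in extraction. The standard Fisher kernel definitions (after Jaakkola & Haussler) matching the four labels are, as a sketch, with the generative model p(x | θ) here being the MEM of the previous slide:

```latex
U_x = \nabla_\theta \log p(x \mid \hat\theta)
\quad\text{(Fisher score as feature transform)}

\mathcal{I} = \mathbb{E}_x\!\left[ U_x U_x^{\top} \right]
\quad\text{(Fisher information; its inverse acts as a Riemannian metric)}

K_{\mathrm{lin}}(x, y) = U_x^{\top}\, \mathcal{I}^{-1} U_y
\quad\text{(linear Fisher kernel)}

K_{\mathrm{G}}(x, y) = \exp\!\Big(-\tfrac{1}{2\sigma^2}\,(U_x - U_y)^{\top}\, \mathcal{I}^{-1}\, (U_x - U_y)\Big)
\quad\text{(Gaussian Fisher kernel)}
```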

  16. ROC Curves: MEM Likelihood Ratio, LFK-SVM, L-SVM, G-SVM P1: MEM vs. FKSVM; P2: Linear SVM vs. FKSVM; P3: GKSVM vs. FKSVM. Mean AUC: MEM 0.846, Linear SVM 0.846, GKSVM 0.874, FKSVM 0.892. Cognitive Systems Laboratory Northeastern University

  17. RSVP Keyboard: A Spelling Interface Based on the P3 Signal Cognitive Systems Laboratory Northeastern University • Subject controls epoch start time • RSVP of letters, 100 ms-500 ms per letter • Duty cycle around 50-80% (each letter is followed by a black screen; the figure shows a sample 1-sequence training epoch with 400 ms and 1000 ms intervals) • Hierarchy: Session > Epoch > Sequence > Trial • Multiple shuffled sequences of the same letters => multi-trial ERP detection

  18. RSVP Keyboard: Fusing Language Model & EEG Evidence Cognitive Systems Laboratory Northeastern University RSVP Keyboard makes letter selections based on joint evidence from a symbol-level n-gram language model and EEG evidence from RSVP of letter sequences, as illustrated earlier. The language model is trained using one or a combination of the Wall Street Journal corpus, the Enron email corpus, and previous conversation scripts provided by the subject. For the following off-line analysis, we use Bayes' theorem to obtain a likelihood-ratio-test-based decision mechanism.
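The Bayes fusion step above can be sketched as multiplying the language-model prior over candidate symbols by the EEG class-conditional likelihood and renormalizing. The function name and the example numbers are hypothetical.

```python
import numpy as np

def fused_posteriors(lm_priors, eeg_likelihoods):
    """Fuse symbol-level LM priors with EEG evidence via Bayes' theorem:
    P(symbol | EEG) is proportional to p(EEG | symbol) * P(symbol | context)."""
    joint = np.asarray(lm_priors, dtype=float) * np.asarray(eeg_likelihoods, dtype=float)
    return joint / joint.sum()

# Hypothetical example with three candidate letters:
priors = [0.6, 0.3, 0.1]   # n-gram LM: P(symbol | typed context)
likes  = [0.2, 0.7, 0.1]   # P3 detector: p(EEG evidence | symbol)
post = fused_posteriors(priors, likes)
```

Note how the fusion can overturn the LM's favorite: strong EEG evidence for the second letter outweighs its weaker prior.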

  19. RSVP Keyboard:Off-line Analysis Results - AUC Cognitive Systems Laboratory Northeastern University

  20. RSVP Keyboard: Off-line Analysis Results – TP rate at the 5% FP operating point Cognitive Systems Laboratory Northeastern University

  21. RSVP Keyboard:Videos of Locked-in Consultant Typing Cognitive Systems Laboratory Northeastern University • 2011-02-02 Preparing for a session • 2011-05-01 A typical calibration phase • 2011-02-02 Locked-in subject G • Calibration; subject’s face • Free typing; screen • 2011-02-02 Healthy subject K • Free typing; subject’s face • Free typing; screen • 2011-06-21 Computational linguistics conference (Portland, OR) • OPB News • Fox12 News

  22. SSVEP-BCI Cognitive Systems Laboratory Northeastern University This paradigm exploits visual cortex response patterns evoked by periodic flickering of a visual stimulus (spatial pattern or light source). Frequency and pseudorandom binary sequence (PRBS) variants are used. • Frequency: Temporal stimulus pattern is 0101010101010…; different symbols have different bit presentation rates; the frequency resolution of PSD estimation imposes a limit; artifacts and background brain activity overlap with the stimulus response in the Fourier domain. • PRBS: Temporal stimulus pattern is, e.g., 11101010000100…; different symbols have different bit sequences; the number of distinct sequences imposes a limit; codes are ultra-wideband, so narrow-band artifacts present a smaller problem.
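The 31-periodic m-sequences used as PRBS codes come from a maximal-length linear feedback shift register (LFSR). A minimal sketch follows; the tap choice (the primitive polynomial x^5 + x^2 + 1) is an assumption for illustration, as the actual codes used are not given on the slides.

```python
def m_sequence(taps, n_bits, seed=1):
    """Generate one period (2**n_bits - 1 bits) of a binary m-sequence
    from a Fibonacci LFSR with feedback taps given as polynomial exponents.
    seed must be nonzero."""
    state = seed
    out = []
    for _ in range(2 ** n_bits - 1):
        out.append(state & 1)                      # emit the LSB
        fb = 0
        for t in taps:                             # XOR the tapped bits
            fb ^= (state >> (n_bits - t)) & 1
        state = (state >> 1) | (fb << (n_bits - 1))
    return out

# 31-bit m-sequence from x^5 + x^2 + 1 (taps 5 and 2)
code = m_sequence((5, 2), 5)
```

Distinct symbols would use different shifts of this code (or different primitive polynomials), exploiting the sharp autocorrelation of m-sequences for template matching.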

  23. Average Oz Response to M-sequence Stimuli Cognitive Systems Laboratory Northeastern University

  24. Single-channel Template Matching Classifier Accuracy as a Scalp Distribution Cognitive Systems Laboratory Northeastern University Figure. Spatial distribution of single-channel correct classification probability among 4 m-sequences for 4 subjects, 15 & 30 bits/sec.

  25. Undergraduate Research Projects Cognitive Systems Laboratory Northeastern University • Brain-controlled iCreate (April 2010) 2010 ECE Capstone 1st position • Brain-controlled Flight Simulator (April 2011) 2011 ECE Capstone 3rd position • Gaze-controlled Robotic Manipulator (April 2011) 2011 ECE Capstone 2nd position • Voice-controlled Wheelchair (April 2011) 2011 ECE Capstone 1st position • Brain-controlled Wheelchair (August 2011) 2011 REU Summer Intern Project

  26. Future Work Cognitive Systems Laboratory Northeastern University • All work so far has used Gtec's LabVIEW software for the USBamp. • We are making the transition to the Matlab API now. • Signal processing has been kept extremely simple in real-time applications; SVM variations achieve better single-trial EEG classification, as we have seen. • We will improve signal processing models for VEP classification without sacrificing the ability to operate in real time. • Artifact management in real time is an important issue not yet addressed in detail. We have a reference-based (e.g. EOG) supervised least-squares artifact reduction module and a static ICA-based artifact reduction module (disabled). • We will implement other supervised and unsupervised artifact detection and reduction modules to work in real time. • We have little experience with MI-BCI, but there have been very impressive proof-of-concept experiments over the last decade. • We will start exploring MI-BCI design in order to catch up with the capabilities demonstrated by other groups.
