Project Information

Currently our end goal is to have an adapter that can quickly and accurately determine the user's intent through a Brain-Computer Interface (BCI) and process this intent into a signal that is readable by a Virtual Reality (VR) headset. Effectively, we would like to create a hands-free VR "controller". We aim to turn brain signals into typical button presses such as 'A', 'B', 'LB', and 'RB', and feed this input into a VR headset, where the instructions are read and processed into visual movement and actions. A rough sketch of what that final mapping layer might look like follows below.
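As an illustrative sketch only (not our implemented design; the intent labels and classify_intent decoder are hypothetical placeholders), the mapping from a decoded intent to a controller button could be as simple as a lookup table:

# Hypothetical sketch of the intent-to-button mapping layer.
# The intent labels below are placeholders for whatever classes
# our eventual EEG decoder actually distinguishes.
BUTTON_MAP = {
    'select': 'A',
    'back': 'B',
    'grip_left': 'LB',
    'grip_right': 'RB',
}

def intent_to_button(intent):
    """Translate a decoded intent label into a controller button, or None."""
    return BUTTON_MAP.get(intent)  # None means "no action this frame"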

This is a fairly good informational video that shows how a BCI is intended to work. This is the website for the hardware used in the video.

As shown in the video, OpenBCI sells a neuro helmet with electrodes attached at key locations on the skull, where they can read the electrical signals output by the brain. These electrodes are then connected to an OpenBCI board loaded with software to process the incoming data. This board can then be connected to a computer where the OpenBCI GUI is installed; the link can be made via cable, Bluetooth, or Wi-Fi. With all three systems working together, data can be gathered from the user and processed into deliverable commands.

What is a BCI and how does it work?

BCI stands for Brain-Computer Interface, and its goal is to connect the human brain to external electronics or other technologies. Just as our bodies respond to the various electrical pulses that our brains send out, these signals could potentially be used to control various technologies in the same way. By translating these brain signals into something much simpler, pressing a button for instance, a person can manipulate a variety of devices with only a thought.

Typically these brain signals can be captured in two different ways, one of which requires surgery and one of which does not. The surgical approach implants special wires within the skin around the head and attaches them to the nerves for a very high-definition signal. The non-invasive approach is much simpler but has a much lower degree of accuracy: electrodes connected to metal contacts rest on key parts of the head, close to the nerves. In either case, the electrical signals sent by the brain travel through wires into a device that translates them into usable signals.

The base data collected will be in the form of brain waves, but these are essentially useless to us until they are separated into different frequency bands and examined for patterns. Once these patterns are detected, either by a software program or by eye, the signals can be formulated into an input signal usable by whatever device it is formatted for. This is where BCI becomes so useful and potentially amazing. Things that used to require a fair amount of physical exertion may now become as simple as thinking a thought: changing the channel on a TV, playing a video game, or even adjusting the thermostat in your house all become possible without ever pressing a button.
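As a rough sketch of the band-separation step just described (assuming NumPy and the Cyton's default 250 Hz sampling rate; the band boundaries are the conventional textbook ones, not values specific to our project), the power in each standard EEG band can be estimated from a window of raw samples:

import numpy as np

FS = 250  # sampling rate in Hz (Cyton default)
BANDS = {'delta': (1, 4), 'theta': (4, 8), 'alpha': (8, 13),
         'beta': (13, 30), 'gamma': (30, 50)}

def band_powers(window):
    """Estimate power per EEG band from a 1-D window of raw samples."""
    freqs = np.fft.rfftfreq(len(window), d=1.0 / FS)
    power = np.abs(np.fft.rfft(window)) ** 2
    return {name: power[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

A pattern detector could then watch these per-band powers over time rather than the raw waveform.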

Potential Hardware

The Board

There are currently 3 boards to choose from:

  • Ganglion Board, 4 channels ($249.99)

  • Cyton Biosensing Board, 8 channels ($499.99)

  • Cyton + Daisy Biosensing Boards, 16 channels ($949.99)

The main difference between the boards seems to be the number of channels they support (4, 8, or 16) and the software used to run and communicate with the board. The most likely candidates are the 4-channel or 8-channel boards, but due to budget constraints we may have to make do with a lower-resolution device.

The Headset

On the website there are 3 types of headsets:

  • Ultracortex "Mark IV" EEG Headset (From $349.99)

  • EEG Electrode Cap Kit (From $399.99)

  • OpenBCI EEG Headband Kit ($199.99)

The headsets are a bit trickier to deal with, as they are sold differently than the boards: they differ in how many electrodes they support (i.e., how many channels) and where those electrodes are placed on the head. The headband option is cheaper, but the Ultracortex does a better job of covering multiple points around and on top of the head.

Another potential source of hardware is https://electrodestore.com/collections/eeg-electrodes . They sell many of the same items separately at a slightly lower price. The open question, though, is whether there are any compatibility issues.

The Choice

In all likelihood our best option is the Cyton Biosensing Board (8 channels) along with the Ultracortex "Mark IV" EEG Headset. With access to 3D printers we can go for the cheaper headset option, printing the extra headset pieces ourselves and attaching them to the helmet frame. With these two pieces we will be able to use 8-channel sampling instead of 4-channel sampling, which will hopefully increase our accuracy in detecting and converting signals.

IT'S ALIVE!!!

This is a screenshot of the GUI after first setting up the Cyton board. The head cups have yet to be installed, but the firmware installed correctly and no errors are being thrown. The next step will be to set up the head cups and sort out what data we are receiving.

After setting up the electrode head cups, we can now start collecting data and seeing the electrical signals that are coming from the brain.

This is what the setup looks like when all the pieces are put together on a human subject. The BCI headset fits fairly well, though depending on head size a smaller or larger helmet may be needed. At the very least, the BCI headset does not interfere too badly with the VR headset, and what interference there is can be fixed by editing the 3D-printed BCI helmet.

Code Experimentation

MATLAB
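This script uses the liblsl-Matlab bindings. It resolves the EEG stream on the network, opens an inlet, and then repeatedly pulls eight samples at a time, printing each sample vector followed by its mean.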

%% Instantiate the library
disp('Loading the library...');
lib = lsl_loadlib();

% Resolve an EEG stream on the network
disp('Resolving an EEG stream...');
result = {};
while isempty(result)
    result = lsl_resolve_byprop(lib, 'type', 'EEG');
end

% Create a new inlet to read from the stream
disp('Opening an inlet...');
inlet = lsl_inlet(result{1});

disp('Now receiving data...');
while true
    % Pull eight consecutive samples from the inlet
    vecs = cell(1, 8);
    for k = 1:8
        [vecs{k}, ~] = inlet.pull_sample();
    end

    % Display each sample vector on its own line
    for k = 1:8
        fprintf('%.2f\t', vecs{k});
        fprintf('\n');
    end

    % Compute and display the mean of each sample vector
    for k = 1:8
        fprintf('%.2f\t\n', mean(vecs{k}));
    end
end


Python
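The Python version below uses pylsl. It assumes each pulled sample carries 125 values (which appears to correspond to the OpenBCI GUI's FFT output rather than the raw time series), averages the last 25 values of each of eight pulls, and re-publishes the eight averages on a second stream, eeg_stream2, for Unity to consume.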

"""Example program to show how to read a multi-channel time series from LSL."""

import time

from pylsl import StreamInlet, resolve_stream, StreamInfo, StreamOutlet

# first resolve an EEG stream on the lab network

print("looking for EEG stream...")

streams = resolve_stream('type', 'EEG')

# create a new inlet to read from the stream

inlet = StreamInlet(streams[0])

rows, cols = (8, 125)

samples = [[0 for i in range(cols)] for j in range(rows)]

# create a new outlet

info = StreamInfo('eeg_stream2', 'EEG2', 1, 250, 'float32', 'myuidemgemg')

outlet = StreamOutlet(info)

while True:

# get a new sample (you can also omit the timestamp part if you're not

# interested in it)

for i in range(0, 8):

samples[i], timestamp = inlet.pull_sample()

sums = [0] * 8

for y in range(0, 8):

for x in range(100, 125):

sums[y] += samples[y][x]

for j in range(0, 8):

sums[j] /= 25

print(sums[j])

for sample in sums:

outlet.push_sample([sample])

time.sleep(0.004)
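Note that resolve_stream() blocks until a matching stream is found, so the OpenBCI GUI needs to be streaming over LSL before this script is started; likewise, Unity will only find the eeg_stream2 outlet once this script is running.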




Setup and Results

For our purposes we utilized LSL (Lab Streaming Layer) streaming through the OpenBCI GUI in order to process the data in real time.

We used code written by Paul Dominick Baniqued, adapted from the open-source LSL4Unity GitHub repository. He has a blog where he explains his code and the purposes he wrote it for, linked below:

Link to Blog

Following most of his instructions, with only slight changes to his code, we were able to live-stream the BCI headset data into Unity to control an object within a 3D space.

To establish a connection between the OpenBCI GUI, Python, and Unity, we used the open-source LSL4Unity library (linked below). The only file that is unneeded is the ".editorconfig" file, as it creates a requirement on the "lsl.dll" library, which is not included as standard within the GitHub package.

Link to LSL4Unity GitHub.

Similar to the blog, we created a cube object and selected "Add Component" to attach a script. Once attached, there are three input fields that need to be filled in as shown in the picture to the left. The stream name and type are changeable depending on what the output stream is named within the Python code; in our case the name is eeg_stream2 and the type is EEG2, but this can be changed to work for other projects/ideas/data. The snippet below shows which values must match.
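Concretely, the name and type fields typed into the Unity component must match the StreamInfo constructed in the Python script above:

from pylsl import StreamInfo

# the first two arguments must match Unity's stream name and type fields
info = StreamInfo('eeg_stream2',  # -> Unity "Stream Name"
                  'EEG2',         # -> Unity "Stream Type"
                  1, 250, 'float32', 'myuidemgemg')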

BCI_Test1.mp4

This is the first test video after establishing the connection between the headset and Unity. Once the stream name and type are set, hitting Play runs the script attached to the cube and causes it to react to the incoming data from the headset. When the cube moves up and down in the video, it is from me stimulating the headset, not from any pre-recorded code, which is why the motion is very jittery and awkward.

We were able to replicate the example given in the blog and even expand upon its initial results. Whenever a large amount of brain activity is sensed, the cube floats higher. At this stage, however, the movements are very choppy and unrefined; we plan to improve on this in the near future, for instance along the lines of the sketch below.
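One likely first refinement (a sketch of an idea, not something we have implemented yet) is to smooth the values before pushing them to Unity, for example with an exponential moving average:

# Sketch: exponential moving average to damp the jitter seen in the video.
# Not yet part of our pipeline; alpha is a knob we would experiment with.
class EMASmoother:
    def __init__(self, alpha=0.1):
        self.alpha = alpha   # smaller alpha -> smoother but slower response
        self.value = None

    def update(self, x):
        # first sample initializes the average; later samples blend in
        if self.value is None:
            self.value = x
        else:
            self.value = self.alpha * x + (1 - self.alpha) * self.value
        return self.value

In the Python script above, each value would then be pushed as outlet.push_sample([smoother.update(sample)]).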