An Embedded EOG-based BCI System for Robotic Control
Valeria TOMASELLI
Senior Engineer
STMicroelectronics
A Brain–Computer Interface (BCI), also referred to as a Human–Computer Interface (HCI), is an emerging technology that establishes a direct communication link between the human brain and an external device; it was conceived mainly to assist people with severe motor disabilities, helping them regain communication and environmental-control abilities.
Patients suffering from conditions such as locked-in syndrome (LIS) often retain the ability to control their eye movements. Electrooculography (EOG) captures highly recognizable signatures of eyelid movements, such as blinks and winks, which can be clearly recorded with low-cost devices.
This makes these signals a valuable source of information, especially for control applications; as a result, EOG can enable impaired people to move around autonomously by controlling Electrically Powered Wheelchairs (EPWs), to interact with their domestic smart environment, or even to communicate with others through virtual spellers.
The development of alternative ways to control external devices, without relying on language or body motions, is important for both motor-impaired and healthy subjects. BCI systems in this field generally suffer from several drawbacks: few control dimensions, low classification accuracy, the need to execute commands synchronously with an external stimulus, and the requirement of extensive training before subjects can control the system. To address some of the most limiting of these drawbacks, we realized a wearable BCI system that runs all the necessary steps, from acquisition to the final inference and command transmission, on a small battery-powered SoC built around a microcontroller unit (MCU). TinyML is well suited to such systems: small, efficient tinyML models make it possible to run the necessary algorithms on resource-constrained devices such as MCUs.
Firstly, we treat EOG signals as a source of control commands, since these signals are highly distinguishable from the rest of the brain activity and can be easily generated by a user without any prior training in using the system. With the aim of using them as control commands, we collected left and right winks, as well as voluntary and involuntary blinks, on two channels from the Fp1 and Fp2 electrodes, shown in Fig. 1. Voluntary and involuntary eye blinks are distinguished in order to prevent unwanted inputs.
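For concreteness, the sketch below shows one plausible way to organize the collected windows; the sampling rate, window length, and array layout are assumptions, as the text does not specify them:

```python
import numpy as np

FS = 250           # assumed sampling rate in Hz (not stated in the text)
WIN = 2 * FS       # assumed 2-second windows around each event
N_CHANNELS = 2     # the two EOG channels, Fp1 and Fp2

# The four event classes used as control commands.
CLASSES = ["voluntary_blink", "involuntary_blink", "left_wink", "right_wink"]

# X holds raw EOG windows, y the integer class labels (0..3).
X = np.empty((0, WIN, N_CHANNELS), dtype=np.float32)
y = np.empty((0,), dtype=np.int64)
```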
The whole process, from acquisition to the final inference, is carried out by three processing blocks embedded in the firmware of the device, namely pre-processing, event detection, and event classification, which are the main focus of this work; a sketch of the first two blocks follows below.
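Since the text does not give implementation details for these blocks, the following Python prototype sketches the first two stages under stated assumptions: a Butterworth band-pass for pre-processing and a short-term-energy threshold for event detection (both the filter design and the detection rule are assumptions, not the authors' method):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 250  # assumed sampling rate in Hz

def preprocess(raw, lo=0.5, hi=20.0, fs=FS):
    """Band-pass filter each EOG channel to suppress drift and external noise."""
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, raw, axis=0)  # raw: (n_samples, 2)

def detect_events(filtered, fs=FS, win_s=0.2, k=4.0):
    """Mark samples whose short-term energy exceeds k times the median
    baseline, so the classifier only runs when there is clear activity."""
    w = int(win_s * fs)
    energy = np.convolve((filtered ** 2).sum(axis=1), np.ones(w) / w, mode="same")
    return energy > k * np.median(energy)  # boolean mask of candidate events
```

On the MCU these stages run in C on streaming samples; the batch form above is only for clarity.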
Firstly, the signals are pre-processed with digital filters to remove external noise from the data. An event detection algorithm then selects portions of the incoming data stream that show clear activity and passes them as inputs to the classifier for recognition. The event detector enables asynchronous operation and improves the use of the MCU's computational resources by avoiding constant classification of the idle state, in which there are no useful commands. Finally, the selected signals go through the event classifier, a one-dimensional convolutional neural network (Conv1D CNN) that classifies voluntary and involuntary eye blinks and left/right eye winks. The tinyML model was trained on the dataset we collected from eight volunteers using our custom board. After the training phase, the model was converted to an equivalent C architecture and embedded in the firmware. The developed tinyML model stays well within the MCU's computational constraints while achieving an average classification accuracy of 99.3% over the four classes.

The proposed BCI system has been used to remotely control three-degrees-of-freedom (DoF) wheeled robots, using left/right winks as rotation commands and single and double blinks as go and stop commands. Subjects both with and without prior experience of the BCI system controlled the robot by following a path traced on the floor without any difficulty, achieving the reported command accuracy in a real-world setting.
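The exact network architecture is not disclosed; the sketch below is a representative Conv1D CNN for the four-class problem, sized to plausibly fit an MCU budget (the layer widths, window length, and the conversion tools named in the final comment are all assumptions):

```python
import tensorflow as tf

WIN, N_CHANNELS, N_CLASSES = 500, 2, 4  # assumed 2 s windows at 250 Hz

# Illustrative Conv1D CNN: two small convolutional stages followed by a
# compact dense head, keeping parameters and activations MCU-friendly.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WIN, N_CHANNELS)),
    tf.keras.layers.Conv1D(8, 7, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(16, 5, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# After training, a converter such as TensorFlow Lite for Microcontrollers or
# STM32Cube.AI (plausible given the STMicroelectronics context) can emit the
# C equivalent that is embedded in the firmware.
```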