Helms posted an update 7 months, 2 weeks ago
In the third stage, the theta-increase and alpha-decrease effects were related to different feedback. This pilot study suggests that time-frequency analysis of EEG signals could be a useful tool for visualizing brain activity in response to different cognitive tasks.

We analyze the efficiency of motor unit (MU) filter pre-learning from high-density surface electromyographic (HDEMG) recordings of voluntary muscle contractions in identifying motor unit firing patterns during elicited muscle contractions. Motor unit filters are estimated from 10 s long, low-level isometric voluntary contractions by gradient-based optimization of three different cost functions, and are then applied to synthetic HDEMG recordings of elicited muscle contractions with dispersion of motor unit firings ranging from 13 ms to 1 ms. We demonstrate that the number of identified MUs and the precision of MU identification depend significantly on the selected cost function. Regardless of the selected cost function and MU synchronization level, the median precision of motor unit identification in elicited contractions is ≥ 95 % and is comparable to the precision in voluntary contractions. On the other hand, the median miss rate increases significantly from less than 1 % to ~3 % with the tested level of MU synchronization.

Clinical Relevance - The identification of MU firings from HDEMG in elicited muscle contractions provides a new tool for in vivo investigation of motor excitability in humans.

The absence of epileptiform activity in a scalp electroencephalogram (EEG) recorded from a potential epilepsy patient can cause delays in clinical care delivery. Here we present a machine-learning-based approach to find evidence for epilepsy in scalp EEGs that do not contain any epileptiform activity according to expert visual review (i.e., “normal” EEGs).
We found that deviations in EEG features representing brain health, such as the alpha rhythm, can indicate the potential for epilepsy and help lateralize the seizure focus, even when commonly recognized epileptiform features are absent. Hence, we developed a machine-learning-based approach that utilizes alpha-rhythm-related features to classify 1) whether an EEG was recorded from an epilepsy patient, and 2) if so, the seizure-generating side of the patient’s brain. We evaluated our approach using “normal” scalp EEGs of 48 patients with drug-resistant focal epilepsy and 144 healthy individuals; a naive Bayes classifier achieved area under the ROC curve (AUC) values of 0.81 and 0.72 for the two classification tasks, respectively. These findings suggest that our methodology is useful in the absence of interictal epileptiform activity and can enhance the probability of diagnosing epilepsy at the earliest possible time.

Brain-computer interface (BCI) systems enable humans to communicate with a machine in a non-verbal and covert way. Many past BCI designs used visual stimuli, due to the robustness of neural signatures evoked by visual input. However, these BCI systems can only be used when visual attention is available. This study proposes a new BCI design using auditory stimuli, decoding spatial attention from electroencephalography (EEG). Results show that this new approach can decode attention with high accuracy (>75%) and has a high information transfer rate (>10 bits/min) compared to other auditory BCI systems. It also has the potential to allow decoding that does not depend on subject-specific training.

Sleep disorders are neurological conditions that can greatly affect the quality of daily life. Manually classifying sleep stages to detect sleep disorders is burdensome, so automatic sleep stage classification techniques are needed.
However, previous automatic sleep scoring methods using raw signals still show low classification performance. In this study, we propose an end-to-end automatic sleep staging framework based on optimal spectral-temporal sleep features, using the Sleep-EDF dataset. The input data were modified using a bandpass filter and then fed to a convolutional neural network model. For five-stage sleep classification, the classification performance was 85.6% and 91.1% using the raw input data and the proposed input, respectively. This result also represents the highest performance among studies using the same dataset. The proposed framework achieves high performance by using optimal features associated with each sleep stage, which may help to identify new features for automatic sleep staging.

Clinical Relevance - The proposed framework could help diagnose sleep disorders such as insomnia by improving sleep stage classification performance.

Recent advancements in wearable technologies have increased the potential for practical gesture recognition systems using electromyogram (EMG) signals. However, despite the high classification accuracies reported in many studies (>90%), there is a gap between academic results and industrial success. This is in part because state-of-the-art EMG-based gesture recognition systems are commonly evaluated in highly controlled laboratory environments, where users are assumed to be resting and performing one of a closed set of target gestures. In real-world conditions, however, a variety of non-target gestures are performed during activities of daily living (ADLs), resulting in many false positive activations. In this study, the effect of ADLs on the performance of EMG-based gesture recognition using a wearable EMG device was investigated. EMG data for 14 hand and finger gestures, as well as continuous activity during uncontrolled ADLs (>10 hours in total), were collected and analyzed.
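Analyses like this typically start by converting the raw EMG stream into windowed time-domain features. A minimal sketch of such a step, assuming generic sliding-window RMS and mean-absolute-value (MAV) features (the window length, sampling rate, and feature set here are illustrative assumptions, not taken from the study):

```python
import numpy as np

def extract_features(emg, fs=2000, win_ms=200, step_ms=100):
    """Slide a window over a (samples, channels) EMG array and
    compute RMS and mean absolute value (MAV) per channel.
    fs, win_ms, and step_ms are hypothetical example values."""
    win = int(fs * win_ms / 1000)
    step = int(fs * step_ms / 1000)
    feats = []
    for start in range(0, emg.shape[0] - win + 1, step):
        seg = emg[start:start + win]
        rms = np.sqrt(np.mean(seg ** 2, axis=0))   # root mean square per channel
        mav = np.mean(np.abs(seg), axis=0)         # mean absolute value per channel
        feats.append(np.concatenate([rms, mav]))
    return np.asarray(feats)  # shape: (n_windows, 2 * n_channels)
```

For an 8-channel wearable device this yields a 16-dimensional feature vector per window, which can then be fed to a gesture classifier.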
Results showed that (1) the cluster separability of the 14 different gestures during ADLs was 171 times worse than during rest; (2) the probability distributions of EMG features extracted from different ADLs were significantly different (p < 0.05); and (3) of the 14 target gestures, a right-angle gesture (extension of the thumb and index finger) was the least often inadvertently activated during ADLs. These results suggest that ADLs and other non-trained gestures must be taken into consideration when designing EMG-based gesture recognition systems.
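The cluster-separability finding can be illustrated with a simple between-class to within-class scatter ratio, a generic Fisher-style measure chosen here for illustration only; this summary does not specify the study's actual metric:

```python
import numpy as np

def separability(features, labels):
    """Ratio of between-class scatter to within-class scatter for
    labelled feature vectors; higher means better-separated classes.
    A generic illustrative measure, not the study's exact metric."""
    classes = np.unique(labels)
    overall_mean = features.mean(axis=0)
    between, within = 0.0, 0.0
    for c in classes:
        cls = features[labels == c]
        mean_c = cls.mean(axis=0)
        between += len(cls) * np.sum((mean_c - overall_mean) ** 2)
        within += np.sum((cls - mean_c) ** 2)
    return between / within
```

Comparing such a ratio computed on features recorded at rest versus during ADLs would quantify how much the gesture clusters degrade under real-world conditions.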