Biological Signals Identification

Hello everyone. I hope y’all are well!

My name is Juan Sebastian Conde. Recently, I have been doing some personal research on the use of machine learning and deep learning techniques to identify important information in biosignals: ECG (electrocardiogram), cortisol sensing analysis, and EEG signals for anxiety and stress, among others.

During my research, I have found a very innovative approach based on the construction of recurrence plots (recurrence images) from these signals. The idea is founded on the use of image recognition techniques to detect and categorise important events.

You can check here the underlying principle:
Garcia-Ceja, Enrique, Md Zia Uddin, and Jim Torresen. “Classification of recurrence plots’ distance matrices with a convolutional neural network for activity recognition.” Procedia Computer Science 130 (2018): 157-163.

Now, there is a particular question that I would like to ask anyone interested in this topic. But first, I would like to provide some context:

  • You are running an experiment with a couple of subjects. These subjects have sensors connected to their bodies, recording bio-signals at 100 Hz. In the experiment, each subject is sitting down and watching a screen that provides TWO simple instructions (stand up or lift your right arm). Each instruction is given at 8-second intervals.
    Now, to identify the potential bio-signals driven by these actions, the data for the analysis consists of (see the sketch just after this list):
  1. The raw signal from the sensor
  2. The times of the markers giving the instructions to the subjects.
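
For concreteness, here is a minimal sketch of how one might slice the raw signal into one epoch per marker. Everything in it (the function name, the pre/post window lengths, the fake data) is my own illustration, not part of the actual experiment:

```python
import numpy as np

FS = 100  # sampling rate in Hz, as in the setup above

def epoch_signal(signal, marker_times_s, pre_s=0.5, post_s=8.0):
    """Slice a 1-D raw signal into one epoch per instruction marker.

    signal         : 1-D array of samples recorded at FS Hz
    marker_times_s : marker onsets in seconds (when instructions appeared)
    pre_s, post_s  : seconds kept before/after each marker (illustrative;
                     post_s=8.0 matches the 8-second instruction interval)
    """
    epochs = []
    for t in marker_times_s:
        start = int((t - pre_s) * FS)
        stop = int((t + post_s) * FS)
        if start >= 0 and stop <= len(signal):   # skip out-of-range epochs
            epochs.append(signal[start:stop])
    return np.stack(epochs)  # shape: (n_markers, n_samples_per_epoch)

# Hypothetical usage with fake data:
raw = np.random.randn(60 * FS)            # one minute of fake signal
markers = [8.0, 16.0, 24.0, 32.0, 40.0]   # instruction onsets in seconds
print(epoch_signal(raw, markers).shape)   # (5, 850)
```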

Okay?

Now comes the fun part. As you know, every subject is different, so their responses might differ; additionally, each subject might execute those actions with a certain delay after the instruction is shown on the screen.

Since we are working with images that are built from the signals, and the signals are changing all the time, the sequence of recurrence images effectively becomes a movie.
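
To make that concrete, here is a minimal sketch (plain NumPy, my own window/step choices) of how such a movie could be built: a window slides along the signal, and each window becomes a pairwise distance matrix, the unthresholded form that Garcia-Ceja et al. feed to their CNN:

```python
import numpy as np

def distance_matrix(window):
    """Pairwise distances D[i, j] = |x_i - x_j| for a 1-D window.
    Thresholding this matrix (D < eps) would give the classic
    binary recurrence plot; the CNN approach uses it as-is."""
    w = np.asarray(window, dtype=float)
    return np.abs(w[:, None] - w[None, :])

def rp_movie(signal, win_len=100, step=25):
    """Slide a window along the signal and build one recurrence
    image per position; win_len and step are illustrative."""
    frames = [distance_matrix(signal[s:s + win_len])
              for s in range(0, len(signal) - win_len + 1, step)]
    return np.stack(frames)  # shape: (n_frames, win_len, win_len)

frames = rp_movie(np.sin(np.linspace(0, 20 * np.pi, 1000)))
print(frames.shape)  # (37, 100, 100)
```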

Is there a way to identify similar images that somehow correspond to those actions?
I imagine that there should be a technique that gives you all the possible “classes” of images in the movie; among them, there should be one particular repeating class that can be identified after each instruction is given.
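
One simple unsupervised baseline (certainly not the only technique) would be to cluster the frames and then check which cluster id keeps reappearing right after the markers. A sketch with scikit-learn, where `frames` is the movie from the previous snippet and k=5 is just a guess I would tune in practice:

```python
import numpy as np
from sklearn.cluster import KMeans

# Flatten each recurrence image and cluster; every frame of the
# movie then gets a "class" label. Raw pixels are the crudest
# possible features; CNN embeddings could replace them.
X = frames.reshape(len(frames), -1)
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
labels = kmeans.labels_  # one class id per frame

# If one cluster id reliably shows up in the frames just after each
# marker, that cluster is a candidate "action" class.
print(labels[:20])
```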

Let me know your thoughts!


Hi Dr Robot,
Thank you for your topic; I found it really interesting.
I am a master's student in mathematical statistics, researching the application of deep learning to geophysics, specifically seismic data, which is also recorded in the time domain.
Because of my statistical background, I have noticed that many researchers deal with this kind of data without considering the autocorrelation caused by time (or space), which relates the observations (training data) to each other. For example, I found some researchers using a CNN and arguing that the segmented data consists of hundreds of images; it's laughable that they end up with an accuracy of around 42%, which is worse than if the model were guessing by luck!
Deep learning models are built on statistical assumptions, and when those assumptions are violated we can end up with poorly calibrated models with high bias and/or high variance, no matter how many layers we add or how we change the hyperparameters.
So my advice is to use models that account for the autocorrelation in your data: RNNs, with all their related algorithms.
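
To illustrate, here is a quick NumPy sketch of the sample autocorrelation function (a random walk stands in for a real signal). Values far from zero at nonzero lags mean neighbouring samples are not independent training examples:

```python
import numpy as np

def acf(x, max_lag=50):
    """Sample autocorrelation of a 1-D series up to max_lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.dot(x, x)
    return np.array([1.0] + [np.dot(x[:-k], x[k:]) / denom
                             for k in range(1, max_lag + 1)])

rho = acf(np.cumsum(np.random.randn(1000)))  # random walk: strongly correlated
print(rho[:5])  # values near 1 at small lags, i.e. samples are far from i.i.d.
```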
Best Regards.


Hi Maryam, thank you so much for this!

It seems that implementing an RNN is the way to move forward. In my continued search on this problem, I have also come across another paper in which recurrence plots and a CNN are able to classify different inputs, in this case an EEG signal.

This is the paper in question: Meng, XianJia, et al. “A motor imagery EEG signal classification algorithm based on recurrence plot convolution neural network.” Pattern Recognition Letters 146 (2021): 134-141.

In this paper, the feature extraction is basically done in two parts: first, by applying a power-normalised spectrum, which they say is based on the features of the human auditory system; second, by using a Gabor filter across the time and frequency domains to extract more dynamic information. (The paper includes a diagram of the full algorithm.)
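
I am not reproducing the paper's pipeline here, but as a rough illustration of the second step, one could apply a Gabor kernel to an ordinary spectrogram (standing in for the paper's auditory-model spectrum). Every parameter value below is made up for the example:

```python
import numpy as np
from scipy.signal import spectrogram, convolve2d

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lam=6.0):
    """Real part of a 2-D Gabor kernel: Gaussian envelope times a
    cosine carrier. Parameter values are illustrative only."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)  # carrier direction
    return (np.exp(-(x**2 + y**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xr / lam))

sig = np.random.randn(100 * 10)                   # 10 s of fake 100 Hz signal
f, t, Sxx = spectrogram(sig, fs=100, nperseg=64)  # time-frequency image
filtered = convolve2d(np.log1p(Sxx), gabor_kernel(), mode="same")
print(filtered.shape)  # same shape as the spectrogram
```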

Here, the segmentation of the signals goes hand in hand with the markers, which indicate the times at which instructions were given to the users for specific tasks; yet again, the timing of the action for each user might differ.
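
One crude way to deal with that per-subject delay (my own idea, not something from the paper) is to estimate each epoch's lag against a template response via cross-correlation and shift it back:

```python
import numpy as np

def align_to_template(epoch, template):
    """Estimate how many samples an epoch lags a template response
    (via cross-correlation) and shift the epoch to compensate.
    np.roll wraps around at the edges, so this is only a rough fix."""
    e = epoch - epoch.mean()
    tmp = template - template.mean()
    lag = np.argmax(np.correlate(e, tmp, mode="full")) - (len(tmp) - 1)
    return np.roll(epoch, -lag), lag

# Hypothetical usage: align every epoch to the grand-average response.
epochs = np.random.randn(5, 850)  # stand-in for real epochs
template = epochs.mean(axis=0)
aligned = np.stack([align_to_template(e, template)[0] for e in epochs])
```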

@paulinpaloalto did help me a lot by pointing out the following: “How far have you gotten in the DLS series so far? It seems like the techniques in Sequence Models (Course 5) are applicable to finding patterns in inputs that change over time. Have you considered that approach? If you have not yet taken Course 5, it might be a good idea to have a look at that. I know that people have developed systems that can recognize musical tunes or even generate music based on RNN techniques (which are introduced in Course 5).”

I think there is more to be done, especially with RNNs…

I haven’t posted more updates since I have been working on getting through the last part of Course 4 (Convolutional Neural Networks) and finally starting to review week 1 of Recurrent Neural Networks.

Working towards the solution of the problems I have highlighted here, I wanted to know: does anyone have a nice, clean tutorial on using TensorFlow for pattern recognition?

I have found this interesting paper.

“Current research technologies are focused, principally on deep neural network architectures that collect spatial data from sEMG signals. The main purpose of this paper is, to implement recurrent neural network (RNN) model based on long-term short-term memory (LSTM), Convolution Peephole LSTM and gated recurrent unit (GRU), which used to train sEMG benchmark databases, and find the correlation between the input (sEMG) and outputs (gesture)”
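
In that spirit, and partly answering my own TensorFlow question above, here is a minimal Keras sketch of a stacked LSTM/GRU classifier over windows of multichannel sEMG. All shapes and layer sizes are placeholders of mine, not values from the paper:

```python
import tensorflow as tf

N_TIMESTEPS, N_CHANNELS, N_GESTURES = 200, 8, 6  # placeholder sizes

model = tf.keras.Sequential([
    tf.keras.Input(shape=(N_TIMESTEPS, N_CHANNELS)),
    tf.keras.layers.LSTM(64, return_sequences=True),  # could also be a GRU or peephole variant
    tf.keras.layers.GRU(32),
    tf.keras.layers.Dense(N_GESTURES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(x_windows, y_labels, ...) once windowed sEMG data and
# integer gesture labels are available
```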

I hope that there is something related to this in the following weeks of Course 5.


I’m bookmarking this discussion and lurking here because this is a field that interests me.