Hi everyone
I have been trying to build a one-dimensional convolutional network, but I am struggling to understand the results I obtain and how to configure the model's hyperparameters. Here is my model:
from tensorflow import keras
from tensorflow.keras.layers import Conv1D, MaxPooling1D, BatchNormalization, Flatten, Dense
from tensorflow.keras import regularizers

model = keras.Sequential([
    Conv1D(filters=2, kernel_size=32, activation='relu', input_shape=(120, 3)),
    # Pooling layer
    MaxPooling1D(pool_size=3),
    # Normalization layer
    BatchNormalization(),
    # Dense layers
    Flatten(),
    Dense(units=1, activation='linear', kernel_regularizer=regularizers.l2(0.0010))
])
My intention is for the model to learn from data collected by an accelerometer, containing the three acceleration axes (X, Y and Z). With this, it should be possible, using linear-regression concepts, to predict the power produced from the vibrations captured by these devices. And the results are these:
It seems like an exciting project, predicting power produced from vibrations.
First, let’s discuss the input data. What kind of data are you using?
Additionally, have you checked if the input data is normalized and preprocessed properly?
It’s also worth noting that the validation loss and mean absolute error seem to be relatively high compared to the training loss and mean absolute error, which could indicate overfitting or a lack of data.
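In case it helps, here is a minimal sketch of what I mean by normalizing the inputs. The arrays `X_train` and `X_val` are hypothetical placeholders for your accelerometer windows of shape `(num_samples, 120, 3)`; random data stands in for them here:

```python
import numpy as np

# Hypothetical placeholder arrays of shape (num_samples, 120, 3)
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 120, 3))
X_val = rng.normal(size=(20, 120, 3))

# Standardize each accelerometer channel (X, Y, Z) using training statistics only
mean = X_train.mean(axis=(0, 1), keepdims=True)  # shape (1, 1, 3)
std = X_train.std(axis=(0, 1), keepdims=True)

X_train_norm = (X_train - mean) / std
X_val_norm = (X_val - mean) / std  # reuse training stats to avoid data leakage
```

The key point is computing the statistics on the training set only and reusing them on the validation set, so the model never sees validation information during preprocessing.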
Hello @saifkhanengr, thank you for responding to this topic!
Thanks for the feedback!
Accelerometers collect the accelerations of objects along three axes (accX, accY and accZ) - raw data - so here we have a dataset labeled with a power value (Y).
During data exploration, we examined and processed the data with moving average, moving kurtosis, and moving variance. I initially used the raw data (x, y and z) in an SVR model; in the performance analysis I obtained a mean squared error of 0.0256, but when plotting the predictions, there were large jumps between one power value and the next.
That said, I then tried using these other features (the moving statistics), with the same result.
So the idea now is to try to use a convolutional neural network to learn and see what happens.
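For reference, the moving-statistics features described above can be sketched with pandas rolling windows (the window length of 30 samples here is just an illustrative choice, not the value from my pipeline):

```python
import numpy as np
import pandas as pd

# Hypothetical raw accelerometer signal for a single axis
acc_x = pd.Series(np.sin(np.linspace(0, 10, 200)))

window = 30  # hypothetical window length
features = pd.DataFrame({
    'moving_average': acc_x.rolling(window).mean(),
    'moving_variance': acc_x.rolling(window).var(),
    'moving_kurtosis': acc_x.rolling(window).kurt(),
}).dropna()  # drop the first window-1 rows, which have no full window
```

The same rolling transforms would be applied to each of the three axes before feeding the features to the SVR.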
Hello @tfprenan! Thanks for sharing more details about your project.
So, you have preprocessed the data using moving average, moving kurtosis, and moving variance techniques. Great.
This seems like an overfitting problem to me (lack of data). How many samples of data do you have?
To improve the model’s performance, I suggest trying different hyperparameters such as the number of filters, kernel size, and learning rate. You can also add more layers to the network, such as additional convolutional and dense layers, to capture more complex relationships between the input and output data.
But, before we dive into hyperparameter tuning, it’s essential to have an appropriate amount of data. The larger the dataset, the better the model will be able to learn and generalize.
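As one possible starting point along those lines, here is a slightly deeper variant of your network. The filter counts, kernel sizes, and learning rate below are illustrative defaults to tune from, not values I have validated on your data:

```python
from tensorflow import keras
from tensorflow.keras import layers, regularizers

model = keras.Sequential([
    keras.Input(shape=(120, 3)),  # 120 time steps, 3 accelerometer axes
    # Two convolutional blocks to capture patterns at different scales
    layers.Conv1D(filters=32, kernel_size=8, activation='relu'),
    layers.BatchNormalization(),
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(filters=64, kernel_size=4, activation='relu'),
    layers.BatchNormalization(),
    layers.MaxPooling1D(pool_size=2),
    # Dense head for the regression output
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(1, activation='linear', kernel_regularizer=regularizers.l2(1e-3)),
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3),
              loss='mse', metrics=['mae'])
```

With a small dataset, though, a deeper model will overfit faster, so keep an eye on the gap between training and validation loss as you add capacity.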