Neural network linear regression model is not able to improve its accuracy

I have created a neural network linear regression model for a house price prediction dataset and I am getting accuracy between 4.7522e-05 and 8.1926e-05 depending on the hyperparameters. But every time I train the model on the training dataset I get the same accuracy, even though the MSE keeps decreasing. I tried to tune the hyperparameters, but no luck. I don't understand why my accuracy is not improving and stays at the same value in every epoch.

Github link → Linear_reg.ipynb

Please move your topic to the correct subcategory.
Here’s the community user guide to get started.

Posting code related to a course assignment in public is discouraged and could get your account suspended. It’s okay to leave stacktraces in public though.

If your question belongs to the General category, consider setting the notebook permission to allow viewing access to those who have the link.

Moved topic to general discussion category.

Your notebook lacks viewing permissions for those with the link.

Hello Priyanka,

Can you elaborate on your hyperparameters and how you tuned them?

For others to give you any idea about how you could improve the accuracy of your model, we need to know:

  1. Your dataset/model in detail, and how the dataset was split. If you used the same house price prediction dataset, what other changes did you make to your model?
  2. What additional features did you add?
  3. Were any additional polynomial features added?
  4. Based on the above, how was your regularization parameter assigned?

That will help you figure out how to address the issue.

Regards
DP

Hello Deepti,

Thanks in advance for your help.

Github link → Linear_reg.ipynb

For hyper-parameter tuning I have tried the following:
1] I preprocessed the house price dataset before training the model.
2] I tried different numbers of neurons and hidden layers to improve accuracy.
3] I tried different optimizers (Adam, RMSprop, Adagrad) as well as different batch sizes and epoch counts.
4] I also tried applying L1 and L2 regularization, and I tried with no regularization.

Despite all this hyperparameter tuning, the accuracy stays the same.
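To show what I mean, the setup I varied looks roughly like this (the layer sizes, input width, and regularization strengths here are illustrative, not the exact values from the notebook):

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import regularizers

# Illustrative builder -- hyperparameter values are examples only
def build_model(n_neurons=64, n_hidden=2, reg=None, optimizer='adam'):
    model = keras.Sequential([keras.Input(shape=(8,))])
    for _ in range(n_hidden):
        model.add(keras.layers.Dense(n_neurons, activation='relu',
                                     kernel_regularizer=reg))
    model.add(keras.layers.Dense(1))  # single linear output for price
    # 'accuracy' is the metric I have been using
    model.compile(loss='mse', optimizer=optimizer, metrics=['accuracy'])
    return model

# Variants tried: different widths/depths, optimizers, and regularizers
m1 = build_model(optimizer='rmsprop')
m2 = build_model(n_neurons=128, n_hidden=3, reg=regularizers.l2(1e-4))
```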

The accuracy values you cite are rather small numbers (around 4×10⁻⁵ to 8×10⁻⁵).

Do you expect your model should perform better than that?

Regularization will typically increase the training cost but decrease the validation and test set cost. Do you have numbers for these?

The model is compiled with accuracy as the metric.

model.compile(
    loss='mse',
    optimizer='adam',
    metrics=['accuracy']
)

This is incorrect: the 'accuracy' metric is computed as if the problem were a classification problem. For regression, use metrics such as MSE or MAE. Here's an example
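A minimal sketch of a regression compile step with appropriate metrics (the layer sizes and input width are placeholders; the relevant part is the metrics argument):

```python
import tensorflow as tf
from tensorflow import keras

# Hypothetical regression model -- layer sizes are illustrative only
model = keras.Sequential([
    keras.Input(shape=(8,)),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(1),  # single linear output for price prediction
])

# Use regression metrics (MSE / MAE) instead of 'accuracy'
model.compile(
    loss='mse',
    optimizer='adam',
    metrics=['mse', 'mae'],
)
```

With these metrics, the values reported during training track the regression error directly, so you can see them shrink along with the loss.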

No. I applied regularization, but the training, validation, and test costs are the same as what I got without regularization. There is no change in the cost.

Then I suspect something in your model is broken.

Hello Priyanka,

I haven’t had the chance to look at your assignment notebook yet. I am travelling; I will let you know by tomorrow once I have had a look at your model.

Based on what both mentors Tom and Balaji have mentioned, you surely cannot use both L1 and L2 regularisation for the same model; you need to choose one of them based on the kind of analysis and prediction you are handling. The same goes for the metric. I don’t know the details of your analysis, but from what I remember, house price prediction is a multifactorial regression that produces a price estimate, so the metric applied is incorrect, as Balaji said.

Regards
DP

You can apply L1 & L2 regularization simultaneously on a layer.
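In Keras this is the built-in `l1_l2` regularizer (an elastic-net-style penalty); a minimal sketch, with illustrative penalty strengths:

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import regularizers

# A single Dense layer penalized with both L1 and L2 at once
# (the l1/l2 strengths here are example values, not recommendations)
layer = keras.layers.Dense(
    32,
    activation='relu',
    kernel_regularizer=regularizers.l1_l2(l1=1e-5, l2=1e-4),
)
```

Choosing between L1, L2, or both is a modeling decision (sparsity vs. weight shrinkage), not a hard constraint of the framework.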

Thanks.

After replacing the metric with MSE, it's working.
I am able to tune the hyper-parameters and am getting the expected results.