Why is my MSE cost going up?

Hi guys, I’m one week into my transition into an ML career.
I have decided to implement a linear regression model in JavaScript.
https://github.com/monstercameron/ML-scratchpad <— Repo here

Most of the code was written by ChatGPT; I just iterated on the prompts and tweaked the code so it all works together.

I have a rough idea of how univariate linear regression works: take the features and labels, initialize w and b at 1, make a prediction, calculate the MSE, run gradient descent (still shaky on this concept), readjust the model’s w and b, and repeat until the MSE is low enough / converges.

The issue is that my MSE keeps climbing even when I set the learning rate to a very small number. Any ideas?


Hello @monstercameron,

I think your code doesn’t implement gradient descent correctly. Please check this out:


PS: I didn’t read all of your code, but I suggest you make sure the rest is correct as well.
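For reference, a single gradient-descent step for univariate linear regression can be sketched like this (illustrative names, not your repo’s actual code; the key point is that the gradient for w carries a factor of x that the gradient for b does not):

```javascript
// One gradient-descent step for y ≈ w * x + b, minimizing MSE.
// Illustrative sketch; variable names are not from the repo.
const step = (X, y, w, b, alpha) => {
  const m = X.length;
  let dw = 0;
  let db = 0;
  for (let i = 0; i < m; i++) {
    const error = (w * X[i] + b) - y[i];
    dw += error * X[i]; // gradient for w: error times the feature
    db += error;        // gradient for b: error alone
  }
  return { w: w - alpha * (dw / m), b: b - alpha * (db / m) };
};
```

Running this step repeatedly with a small enough alpha should make the MSE decrease; if it climbs instead, the usual culprits are a wrong sign in the update or a wrong gradient.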

Thanks for the hint, I think it’s working now

The plot is a bit rough; the scaling isn’t working properly yet.

This was the original (buggy) code for the gradient-descent update; note that the gradient for w is missing the multiplication by x:

        // Update the model parameters using the gradient
        w = w - alpha * (1 / X.length) * error.reduce((sum, current) => sum + current, 0);
        b = b - alpha * (1 / X.length) * error.reduce((sum, current) => sum + current, 0);

Using ChatGPT, I broke the code out into a more verbose format so I can read and understand it better.

const calculateGradient = (X, y, w, b) => {
    // console.log("line 66", w);
    const m = X.length;
    return X.reduce((accumulator, x, j) => {
        // console.log("line-69", j, accumulator, x, y[j], w, b);
        const predictedValue = predict({ w, b }, x);
        const difference = predictedValue - y[j];
        const product = difference * x;
        // console.log("line 73", predictedValue, difference, product, x);
        return accumulator + product;
    }, 0) * (1 / m);
};

Here I can see how it matches up better to the screenshot above.
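`calculateGradient` only covers the gradient for w; the gradient for b follows the same pattern without the x factor. A sketch (the `predict` helper here is my assumption of the repo’s shape):

```javascript
// Assumed shape of the repo's predict helper (an assumption, not repo code).
const predict = ({ w, b }, x) => w * x + b;

// Gradient of MSE with respect to b: the mean prediction error, no x factor.
const calculateGradientB = (X, y, w, b) => {
    const m = X.length;
    return X.reduce((accumulator, x, j) => {
        const difference = predict({ w, b }, x) - y[j];
        return accumulator + difference;
    }, 0) * (1 / m);
};
```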


How can I verify that my predictions are accurate?

Hi @monstercameron

I would suggest a residual analysis, using the metric that best describes your business problem, and working with:

  • a training set,
  • (a validation set), in brackets since you seem to have only ~25 labels, if I interpret your last plot correctly and assume it shows your complete set of labels,
  • and a test set (that was never seen by the model before).
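As a quick illustration, residuals on a held-out set can be computed like this (a sketch; the `predict` helper and its `{ w, b }` signature are my assumptions, not code from your repo):

```javascript
// Assumed predict helper shape (an assumption, not repo code).
const predict = ({ w, b }, x) => w * x + b;

// Residuals: label minus prediction. For an adequate linear fit they
// should scatter around zero with no obvious pattern.
const residuals = (model, X, y) => X.map((x, i) => y[i] - predict(model, x));

// One possible summary metric over a held-out set; pick whichever metric
// matches the business problem best (MAE, RMSE, ...).
const meanAbsoluteError = (model, X, y) =>
  residuals(model, X, y).reduce((sum, r) => sum + Math.abs(r), 0) / X.length;
```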

I believe you will find a thread here that describes how to do that:

see also this repo.

Hope that helps!

Best regards

I was only taking a single feature out of the dataset so I could start learning and understanding the concepts in code. I grab a random 5% of the samples for my validation set and remove them from the training set.

// Sample 5% of the data as a validation set
const sampleSize = Math.round(X.length * 0.05);
const sampleIndices = sample(X.length, sampleSize);

const validationX = sampleIndices.map(i => X[i]);
const validationY = sampleIndices.map(i => y[i]);

X = X.filter((_, i) => !sampleIndices.includes(i));
y = y.filter((_, i) => !sampleIndices.includes(i));
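The sample helper isn’t shown above; it’s meant to draw distinct indices without replacement, so no point ends up in both sets. A minimal version might look like this (a hypothetical sketch, not necessarily the repo’s exact implementation):

```javascript
// Draw `size` distinct indices from [0, n) without replacement,
// using a partial Fisher-Yates shuffle. Hypothetical sketch.
const sample = (n, size) => {
  const indices = Array.from({ length: n }, (_, i) => i);
  for (let i = 0; i < size; i++) {
    // pick a random position in the not-yet-fixed tail and swap it forward
    const j = i + Math.floor(Math.random() * (n - i));
    [indices[i], indices[j]] = [indices[j], indices[i]];
  }
  return indices.slice(0, size);
};
```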

I will take a look at the link you sent me thanks!

Maybe I’m also jumping the gun, as I just started my second week…

I think you ought to complete Course 1 and learn the basics from the course materials.