W1 Optional Logistic Regression: Gradient descent - wrong values/output

When checking the gradient descent, I notice that the cost decreases after each iteration, but the output still does not match the expected result. I get these error messages:

Wrong output for the loss function. Check how you are implementing the matrix multiplications.
Wrong values for weight’s matrix theta. Check how you are updating the matrix of weights.
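For context, if I'm reading the notebook right, the cost it expects is the standard binary cross-entropy over the sigmoid activations (this is the textbook formula, written with $\mathbf{h} = \mathrm{sigmoid}(\mathbf{x}\theta)$):

$$J(\theta) = -\frac{1}{m}\left(\mathbf{y}^{\top}\log(\mathbf{h}) + (1-\mathbf{y})^{\top}\log(1-\mathbf{h})\right)$$

Here is my implementation: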

# UNQ_C2 GRADED FUNCTION: gradientDescent
def gradientDescent(x, y, theta, alpha, num_iters):
    '''
    Input:
        x: matrix of features which is (m,n+1)
        y: corresponding labels of the input matrix x, dimensions (m,1)
        theta: weight vector of dimension (n+1,1)
        alpha: learning rate
        num_iters: number of iterations you want to train your model for
    Output:
        J: the final cost
        theta: your final weight vector
    Hint: you might want to print the cost to make sure that it is going down.
    '''
    ### START CODE HERE ###
    # get 'm', the number of rows in matrix x
    m = len(x)

    for i in range(0, num_iters):
        
        # get z, the dot product of x and theta
        z = np.dot(x, theta)
        loss = z - y
        
        # calculate the cost function
        J = (-1/m) * np.sum(loss)
        print("Iteration %d | Cost: %f" % (i, J))
                
        gradient = np.dot(x.T, loss)/m
        
        #update theta
        theta = theta - (alpha * gradient)
        
        
    ### END CODE HERE ###
    J = float(J)
    return J, theta

I still can't figure out what is causing these error messages. Any tips on how to solve this, please?

Thanks!

Hi @Chenchela

You got the calculation of z right, but further on you modified the code too much. You are missing some of the provided code and hints, and it would be hard to recover from here. I would suggest saving your previous work and starting over.
Check out the section “Refresh your Lab Workspace”:
https://www.coursera.support/s/article/360044758731-Solving-common-issues-with-Coursera-Labs?language=en_US
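For reference, once you have a fresh copy: the textbook shape of one iteration is logits, then sigmoid, then the cross-entropy cost, then a gradient built from the prediction error h - y rather than from z - y. Below is a minimal generic sketch of that standard algorithm (not the notebook's graded code; the sigmoid below stands in for the helper you implement earlier in the lab, and gradient_descent_sketch is just an illustrative name):

import numpy as np

def sigmoid(z):
    # stand-in for the sigmoid helper defined earlier in the notebook
    return 1 / (1 + np.exp(-z))

def gradient_descent_sketch(x, y, theta, alpha, num_iters):
    m = x.shape[0]
    for _ in range(num_iters):
        z = np.dot(x, theta)   # (m,1) raw scores
        h = sigmoid(z)         # (m,1) predicted probabilities
        # binary cross-entropy cost, computed from h, not from z - y
        J = -(np.dot(y.T, np.log(h)) + np.dot((1 - y).T, np.log(1 - h))) / m
        # gradient uses the prediction error h - y
        gradient = np.dot(x.T, h - y) / m
        theta = theta - alpha * gradient
    return J.item(), theta    # .item() turns the (1,1) cost array into a float

Once your function follows this shape, the two failing checks (the loss value and the theta updates) should line up with the expected output.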