Assignment 3 - I keep getting zero

Hi

With assignment 3 I keep getting zero despite having done all 6 exercises. I have gone over the code several times and can't find the problem.

Can someone please assist me? I can't proceed if I don't pass this assignment.

Kind regards

Gideon Slabbert

Please share the error (or feedback) with us. What message do you see when you click on "Show grader output"?

I got 0% every time.

When you receive 0, the grader also tells you the reason why you are getting 0. Please click on "Show grader output" and share that message with us.

Comment line with index: UNQ_C2 wasn't found in code

I get this.

It means you removed some code that you were not supposed to remove. Please read this for further guidance.

Cell #UNQ_C2. Can't compile the student's code. Error: IndentationError('unexpected indent', ('/tmp/student_solution_cells/cell_10.py', 15, 2, ' def compute_cost(X, y, w, b, lambda_= 1):\n'))

I get this.
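For context, an "unexpected indent" means a line is indented where Python expects none, for example a def statement that was accidentally shifted right inside a top-level cell. A minimal illustration (made-up snippet, not the assignment code):

```python
# A def line with a stray leading space, like the one in the grader
# message above, cannot even be compiled by Python.
bad_cell = " def compute_cost(X, y, w, b, lambda_=1):\n     return 0\n"

try:
    compile(bad_cell, "cell_10.py", "exec")
except IndentationError as e:
    # Python reports the same "unexpected indent" message the grader shows
    print(type(e).__name__ + ":", e.msg)
```

Removing the stray leading space before `def` fixes this particular error.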

OK. Good.

I still get 0%.

It means your code for compute_cost is not correct. Do not submit your assignment until you get "All tests passed" in your assignment.

Please share the full error which you see in the notebook.

Hi

It says: Cell #11. Can't compile the student's code. Error: TypeError('unsupported format string passed to tuple.__format__')
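For context, this particular TypeError typically appears when a function returns a tuple but the caller formats the result with a float format spec. A hypothetical illustration (the variable names here are made up):

```python
# If compute_cost returns (total_cost, gradients) but the notebook expects
# a plain float, formatting the tuple with ".3f" raises this exact error.
result = (0.5, [0.1, 0.2])  # a tuple instead of a single float

try:
    print(f"Cost: {result:.3f}")
except TypeError as e:
    print(e)  # unsupported format string passed to tuple.__format__
```

So the error suggests the function's return value does not have the type the notebook's test code expects.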

Here is my code for Exercise 2

# UNQ_C2
# GRADED FUNCTION: compute_cost

import numpy as np

def compute_cost(X, y, w, b, *argv):
    """
    Compute the cost function for logistic regression and its gradient.

    Args:
        X (ndarray): Input data with shape (m, n).
        y (ndarray): Target labels with shape (m,).
        w (ndarray): Weight vector with shape (n,).
        b (float): Bias term.

    Returns:
        total_cost (float): Average cross-entropy loss over all training examples.
        gradients (ndarray): Gradient of the cost function with respect to the weights.
    """
    m, n = X.shape
    loss_sum = 0
    gradients = np.zeros(n)  # Initialize an array to store the gradients for each weight

    for i in range(m):
        z_wb = np.dot(X[i], w) + b      # linear combination
        f_wb = 1 / (1 + np.exp(-z_wb))  # sigmoid activation
        loss = -y[i] * np.log(f_wb) - (1 - y[i]) * np.log(1 - f_wb)
        loss_sum += loss

        # Accumulate the gradient of the loss with respect to each weight
        for j in range(n):
            gradients[j] += (f_wb - y[i]) * X[i][j]

    total_cost = (1 / m) * loss_sum
    gradients /= m  # Divide by the number of training examples to get the average

    return total_cost, gradients
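One quick way to sanity-check a loop-based implementation like the one above is to compare it against a vectorized NumPy computation of the same cost and gradient. A sketch on tiny synthetic data (the inputs here are made up; this is not part of the graded notebook):

```python
import numpy as np

def compute_cost(X, y, w, b, *argv):
    """Loop-based logistic cost and gradient, mirroring the code above."""
    m, n = X.shape
    loss_sum = 0.0
    gradients = np.zeros(n)
    for i in range(m):
        f_wb = 1 / (1 + np.exp(-(np.dot(X[i], w) + b)))  # sigmoid
        loss_sum += -y[i] * np.log(f_wb) - (1 - y[i]) * np.log(1 - f_wb)
        for j in range(n):
            gradients[j] += (f_wb - y[i]) * X[i][j]
    return loss_sum / m, gradients / m

# Tiny synthetic problem
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
y = np.array([0.0, 1.0, 1.0, 0.0, 1.0])
w = rng.normal(size=3)
b = 0.1

cost, grads = compute_cost(X, y, w, b)

# Vectorized equivalent: same math without the Python loops
f = 1 / (1 + np.exp(-(X @ w + b)))
cost_vec = -np.mean(y * np.log(f) + (1 - y) * np.log(1 - f))
grads_vec = X.T @ (f - y) / X.shape[0]

assert np.isclose(cost, cost_vec)
assert np.allclose(grads, grads_vec)
print("loop and vectorized versions agree")
```

If the two disagree, the loop version almost certainly has a bug; if they agree but the grader still fails, the mismatch is more likely in the function's signature or return values.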

For exercise 1 it says all tests passed, yet I still get zero, and for exercises 2 to 6 there are error messages.

Hi Saif

I posted the above yesterday.

Please help!

Hi @Gideon_Jesse_Slabber! I am sorry for the late reply.

I guess this is the new version of the assignment, where the return values are total_cost and gradients. I have an old assignment notebook (where the return value is only total_cost) and I don't have access to the latest version. Please wait for an MLS mentor to respond to you.

@rmwkwok @Mujassim_Jamal

Hi

Thank you for replying. To be honest, I deleted the entire code by mistake and then tried to piece it back together.

I don't think this version is any different from yours.

Oh, I see. Please get a fresh copy of your assignment as described here, and then write your code only between the markers "### START THE CODE ###" and "### END THE CODE ###". Don't change anything else.