Problem in Week 3 practical lab

Hi,

I coded the cost function; it passes the first test but fails the remaining test cases. Kindly help me understand what is wrong with this code. I will be very grateful to you.

I am going to share the code and error with you.

Code here:

# GRADED FUNCTION: compute_cost

def compute_cost(X, y, w, b, lambda_=1):
    """
    Computes the cost over all examples
    Args:
      X : (ndarray Shape (m,n))  data, m examples by n features
      y : (array_like Shape (m,)) target value
      w : (array_like Shape (n,)) values of parameters of the model
      b : (scalar)               value of bias parameter of the model
      lambda_ : unused placeholder
    Returns:
      total_cost : (scalar) cost
    """
    m, n = X.shape

    ### START CODE HERE ###
    # moderator edit: code removed
    ### END CODE HERE ###

    return total_cost

Error:

Cost at test w and b (non-zeros): nan

AssertionError Traceback (most recent call last)
in
8
9 # UNIT TESTS
---> 10 compute_cost_test(compute_cost)

~/work/public_tests.py in compute_cost_test(target)
24 b = 0
25 result = target(X, y, w, b)
---> 26 assert np.isclose(result, 2.15510667), f"Wrong output. Expected: {2.15510667} got: {result}"
27
28 X = np.random.randn(4, 3)

AssertionError: Wrong output. Expected: 2.15510667 got: 1.7461071120044385

Expected Output:

Cost at test w and b (non-zeros): 0.218

Hi @Muhammad_Abrar_Hussa ,

z_wb has to be set to zero before looping through all the features. Please check the Hints; they explain how this is done.
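
To make that concrete, here is a minimal, self-contained sketch of a loop-based logistic cost laid out the way the Hints describe. It is illustrative only, not the graded notebook code: the function name loop_cost_sketch is made up, and sigmoid is written inline rather than using the lab's helper.

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def loop_cost_sketch(X, y, w, b):
    m, n = X.shape
    cost = 0.0
    for i in range(m):
        z_wb = 0.0                       # reset for every example, before the feature loop
        for j in range(n):
            z_wb += X[i, j] * w[j]       # accumulate the dot product for this example
        z_wb += b                        # bias added once, after the feature loop
        f_wb = sigmoid(z_wb)
        loss = -y[i] * np.log(f_wb) - (1 - y[i]) * np.log(1 - f_wb)
        cost += loss
    return cost / m

If z_wb is not reset inside the outer loop, each example's sum starts from the previous example's leftover value, so the total cost drifts away from the expected number.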


Please don’t post your code on the forum. Sharing your code is not allowed by the community standards. Posting any error messages or asserts is fine. A screen capture image is the best method.

I have edited your post to remove the code.

If a mentor needs to see your code, we’ll ask you to send it via a private message. Usually we don’t need to see your code, because the errors and asserts provide a lot of clues.

Hi @Kic,

I am getting this error for compute_cost (for logistic regression) on the non-zeros test in Exercise 2.

I have initialized z_i = 0 at the start of the loop.

Cost at test w and b (non-zeros): 0.219

AssertionError Traceback (most recent call last)
in
8
9 # UNIT TESTS
---> 10 compute_cost_test(compute_cost)

~/work/public_tests.py in compute_cost_test(target)
24 b = 0
25 result = target(X, y, w, b)
---> 26 assert np.isclose(result, 2.15510667), f"Wrong output. Expected: {2.15510667} got: {result}"
27
28 X = np.random.randn(4, 3)

AssertionError: Wrong output. Expected: 2.15510667 got: 2.75932426268233

Expected Output:

Cost at test w and b (non-zeros): 0.218

Hi @shamsheer_ahmed ,

It looks like the bias term, b, is added to z_wb inside the loop for the feature calculation.
The bias term should only be added at the end of the feature-calculation block, once all the features for one training example have been processed.
Have a look at the Hints; they show where the bias term is added to z_wb.

@Kic Solved…Thank you

@Kic
One more issue, this time with dj_db while calculating gradient descent. dj_db is -0.2 instead of -0.1.

dj_db at initial w and b (zeros):-0.2
dj_dw at initial w and b (zeros):[-12.00921658929115, -11.262842205513591]

Expected Output:

dj_db at initial w and b (zeros) -0.1
dj_dw at initial w and b (zeros): [-12.00921658929115, -11.262842205513591]

Hi @shamsheer_ahmed ,

Looping over each feature should have only 2 lines of code; however, from what you have posted here, I can see that this loop includes lines that belong to the first ‘for’ loop’s code block. You need to move those lines out to the first code block by removing one level of indentation (4 spaces to the left), so that they line up with the second ‘for’ statement itself rather than sitting inside its block.
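
For illustration, here is a minimal sketch of which accumulation belongs in which loop. It is not the graded code: the name gradient_sketch is made up, and it uses np.dot for the prediction purely to keep the example short, instead of the lab's explicit z_wb loop.

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def gradient_sketch(X, y, w, b):
    m, n = X.shape
    dj_dw = np.zeros(n)
    dj_db = 0.0
    for i in range(m):                   # first 'for': one pass per training example
        err_i = sigmoid(np.dot(X[i], w) + b) - y[i]
        for j in range(n):               # second 'for': one pass per feature
            dj_dw_ij = err_i * X[i, j]   # only these two lines belong here
            dj_dw[j] += dj_dw_ij
        dj_db += err_i                   # back in line with the second 'for' statement
    return dj_db / m, dj_dw / m

If dj_db += err_i is indented into the feature loop, err_i gets added once per feature, which is why a two-feature data set reports -0.2 instead of -0.1.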

Here is a link to some very good resources on learning Python and useful tips on debugging etc, which you may find helpful.

@Kic
Thank you a ton. Now, dj_db outputs fine, but dj_dw gets an error. Please see below.

dj_db at initial w and b (zeros): -0.1
dj_dw at initial w and b (zeros): [0.0, -11.262842205513591]

Expected Output:

dj_db at initial w and b (zeros) -0.1
dj_dw at initial w and b (zeros): [-12.00921658929115, -11.262842205513591]

Hi @shamsheer_ahmed ,

Check out the Hints; they should show you where the block of code for calculating dj_dw belongs.

@Kic
Resolved that issue. But the non-zero implementation for w, b returns this error!

dj_db at test w and b: -0.6
dj_dw at test w and b: [-44.83135361795273, -44.37384124957207]

AssertionError Traceback (most recent call last)
in
8
9 # UNIT TESTS
---> 10 compute_gradient_test(compute_gradient)

~/work/public_tests.py in compute_gradient_test(target)
51 dj_db, dj_dw = target(X, y, test_w, test_b)
52
---> 53 assert np.isclose(dj_db, 0.28936094), f"Wrong value for dj_db. Expected: {0.28936094} got: {dj_db}"
54 assert dj_dw.shape == test_w.shape, f"Wrong shape for dj_dw. Expected: {test_w.shape} got: {dj_dw.shape}"
55 assert np.allclose(dj_dw, [-0.11999166, 0.41498775, -0.71968405]), f"Wrong values for dj_dw. Got: {dj_dw}"

AssertionError: Wrong value for dj_db. Expected: 0.28936094 got: 0.4225095475509334

Expected Output:

dj_db at test w and b (non-zeros) -0.5999999999991071
dj_dw at test w and b (non-zeros): [-44.8313536178737957, -44.37384124953978]

Hi @shamsheer_ahmed ,

You should have all the tools you need to debug this one.
What could be the problem?
The AssertionError is complaining that your output for dj_db is not as expected. Why is your output for dj_db more than the expected value? What is the extra bit that is being added to dj_db?

@Kic
I apologize, no luck debugging that part related to dj_db! I have wasted a lot of hours over the past 4 days :)

What did you do? Do you understand what the code is required to do? Do the implementation instructions and the hints work for you? If not, what is it that you don’t understand?

It is supposed to be simple, Kic, at the end of the day: a computation of gradient descent. I am not able to figure out why the non-zero test fails. The code looks fine. I went back to the original class lab a few times to see what I am doing wrong! Appreciate your help to close this issue :)

Send me a screenshot of the function (not a copy-and-paste of the code) by direct message. My guess is that the code blocks are not in the right place, i.e. an indentation problem.

Hi @shamsheer_ahmed ,

The problem is indentation in the wrong place. There are a few areas that need attention:

  1. The notebook’s code uses 4 spaces per indentation level. The variables and ‘for’ statement shown in red are telling you that the indentation there is incorrect, so those places need to be changed to exactly 4 spaces per level.
  2. The bias term, b, should only be added to z_wb outside of the inner ‘for’ loop, because the bias is not added per feature. It should be at the 2nd level of indentation.
  3. f_wb and err_i should be at the 2nd level of indentation. dj_db += err_i should come right after the err_i calculation, also at the 2nd level of indentation (a minimal layout sketch follows below).
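
For anyone who hits the same wall, here is a minimal sketch of the indentation layout those three points describe, assuming the loop-based structure from the Hints. The names follow the thread (z_wb, f_wb, err_i, dj_db, dj_dw); the function name and the inline sigmoid are made up, and this is not the notebook’s graded code.

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def gradient_layout_sketch(X, y, w, b):
    m, n = X.shape                       # 1st level: inside the function
    dj_dw = np.zeros(n)
    dj_db = 0.0
    for i in range(m):                   # loop over training examples
        z_wb = 0.0                       # 2nd level: once per example
        for j in range(n):               # loop over features
            z_wb += X[i, j] * w[j]       # 3rd level: once per feature
        z_wb += b                        # 2nd level: bias added once per example
        f_wb = sigmoid(z_wb)             # 2nd level
        err_i = f_wb - y[i]              # 2nd level
        dj_db += err_i                   # 2nd level: right after err_i
        for j in range(n):
            dj_dw[j] += err_i * X[i, j]  # 3rd level
    return dj_db / m, dj_dw / m          # 1st level

Any 2nd-level line that slides into a 3rd-level block gets executed once per feature instead of once per example, which is exactly the kind of extra contribution that makes dj_db come out larger than expected.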

Kic, you are amazing, it took me 30 seconds to fix it! Thank you. You made my day :) God bless you :pray: