Expected output off on week 3 practice lab

For this step in the week 3 practice lab:

Compute and display gradient with w initialized to zeroes

initial_w = np.zeros(n)
initial_b = 0.

dj_db, dj_dw = compute_gradient(X_train, y_train, initial_w, initial_b)
print(f'dj_db at initial w (zeros): {dj_db}')
print(f'dj_dw at initial w (zeros): {dj_dw.tolist()}')

I get these results:

dj_db at initial w (zeros): [-0.12009217 -0.11262842]
dj_dw at initial w (zeros): [-12.00921658929115, -11.262842205513591]

Whereas the expected output is:

dj_db at initial w (zeros): -0.1
dj_dw at initial w (zeros): [-12.00921658929115, -11.262842205513591]

And that seems to be producing errors when I run the next cell for cost and gradient with non-zero w.

Perhaps you are dividing by m inside the for loop, when it should be done once outside it.
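
If it helps, here is a minimal sketch of the intended structure (sigmoid stands in for the helper the lab provides; treat this as an outline under those assumptions, not the lab's exact solution):

import numpy as np

def sigmoid(z):
    # Stand-in for the sigmoid helper the lab provides
    return 1 / (1 + np.exp(-z))

def compute_gradient(X, y, w, b):
    # Minimal sketch: accumulate the error terms inside the loop,
    # divide by m exactly once, outside the loop.
    m, n = X.shape
    dj_dw = np.zeros(n)
    dj_db = 0.                       # scalar accumulator, not an array
    for i in range(m):
        err = sigmoid(np.dot(X[i], w) + b) - y[i]
        dj_db += err                 # one scalar term per example
        dj_dw += err * X[i]          # one term per feature
    return dj_db / m, dj_dw / m      # division by m happens here, once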

@shanup any suggestions for me?

Hello @brooksjordan

Right off the bat, why is there a dimension mismatch in dj_db between your actual output and the expected output (a 2-element array vs. a scalar)?

Yep, that's my question.

I’m not sure …

This happens if your dj_db update is inside the for loop and you have indexed it as dj_db[j] … should you be doing that?
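
To illustrate with made-up numbers (everything below is hypothetical, just to show the shape difference):

import numpy as np

# Hypothetical illustration only -- made-up error terms, m = 2, n = 2.
errs = [-6.0, -6.0]
n = 2

dj_db_buggy = np.zeros(n)   # starts life as an array: already suspicious
dj_db_right = 0.
for err in errs:
    for j in range(n):
        dj_db_buggy[j] += err   # indexed per feature: ends up shape (n,)
    dj_db_right += err          # plain scalar: ends up a single number

print(dj_db_buggy / len(errs))  # [-6. -6.]  an array, like the bad output
print(dj_db_right / len(errs))  # -6.0       a scalar, like the expected output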

@brooksjordan

Could you DM your notebook to me and I will point out the problem area.

Just documenting that this:

dj_db = dj_dw / m

should have been this:

dj_db = dj_db / m

… one letter off.
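
For anyone hitting the same thing, the numbers confirm it, assuming m = 100 in this lab (which the printed values suggest):

import numpy as np

# Sanity check: the buggy line assigned dj_dw / m to dj_db.
dj_dw = np.array([-12.00921658929115, -11.262842205513591])
print(dj_dw / 100)   # [-0.12009217 -0.11262842]

That is exactly the wrong dj_db printed above, so the buggy line explains both the values and the 2-element shape.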

Thanks for the help @shanup