# Week 2 Exercise 5 correct answer but still getting assertion error

Hi there,
I seem to be getting the correct answer but I still get an assertion error. Any advice?

My answer (includes some of the following code for checking):

{moderator edit - solution code removed}

```python
w = np.array([[1.], [2]])
b = 1.5
X = np.array([[1., -2., -1.], [3., 0.5, -3.2]])
Y = np.array([[1, 1, 0]])
```

{moderator edit - solution code removed}

```
(1, 3)
(1, 3)
[[0.15900538]]
(2, 1)
(1, 1)
[[ 0.25071532]
 [-0.06604096]]
[[-0.12500405]]
```

Then the error I get:

```
[[0.15900538]]

AssertionError                            Traceback (most recent call last)
in
      6
----> 8 assert grads["dw"].shape == (2, 1)
     10

AssertionError:
```

You have the formula for dw backward. Please compare to the math version of the formula shown in the instructions. The error from the test cell is telling you that your dw value is the wrong shape. Note that the following is a mathematical identity:

$(A \cdot B)^T = B^T \cdot A^T$
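To see the identity in action with the same shapes as the test case (X is (2, 3) and A - Y is (1, 3)), here is a quick NumPy sanity check; the `dZ` values below are just made-up stand-ins for A - Y:

```python
import numpy as np

# Same shapes as in the test case: X is (2, 3), (A - Y) is (1, 3)
X = np.array([[1., -2., -1.], [3., 0.5, -3.2]])
dZ = np.array([[0.1, -0.2, 0.3]])   # stand-in for (A - Y)
m = X.shape[1]

# The identity (A . B)^T = B^T . A^T in NumPy terms:
lhs = np.dot(dZ, X.T).T      # "backward" order, then transposed
rhs = np.dot(X, dZ.T)        # order from the formula in the instructions
print(lhs.shape, rhs.shape)  # (2, 1) (2, 1)
print(np.allclose(lhs, rhs)) # True

dw = rhs / m
print(dw.shape)              # (2, 1) - the shape the assertion expects
```

So writing the product in the wrong order and then reshaping is not necessary: with the order from the instructions, dw comes out as (2, 1) directly.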

Thank you for the reply… I thought that was the issue, so I had already included `dw = np.reshape(dw, (2, 1))`, but it still gave me the error.
I've switched to `dw = 1/m * np.dot(X, (A - Y).T)` following the formula and still get the same assertion error. A bit lost, as it appears to be correct when I print it!

Ok, so where do things stand at this point? Did you figure out what your problem was? Or do you still have some issues?

I've narrowed down the error to being caused by this part…
which I can't edit, as it's in the second (locked) piece of code. Is `np.float` another way to convert from an array to a value? I'm not sure why it's throwing the assertion error.
If I use `db = np.squeeze(np.array(db))` instead to convert from an array to a value, the same way it was done for the cost, I get the correct answer, but I can't include this in the final uneditable code.

The bias value b here in Logistic Regression is a scalar. So the gradient db should also be a scalar. If you just implement the formula as they show you in the instructions by using np.sum, you will end up with a scalar. The arrays A and Y are both (1, m) row vectors. If I use np.sum to sum a row vector, I get a scalar. Watch this:

```python
np.random.seed(42)
R = np.random.randn(1, 4)
print(R)
print(R.shape)
rs = np.sum(R)
print(f"rs = {rs}")
print(f"type(rs) = {type(rs)}")
```

```
[[ 0.49671415 -0.1382643   0.64768854  1.52302986]]
(1, 4)
rs = 2.5291682463487657
type(rs) = <class 'numpy.float64'>
```


Just on general principles, if the test code fails, the solution is not to change the test code. The solution is to figure out why your code causes the test to fail. That’s why they made the test code not modifiable.

I ran into the exact same issue.
I was getting the correct values, but I was facing issues with the data type assertion. I tried to squeeze the array, but it still gave me the same error.

`db = np.sum((A - Y), axis=1, keepdims=True) / m` (this was giving me an array with a value in it, so I tried to squeeze it like they did for the cost function)

When I removed the parameters `axis` and `keepdims`, the code started working fine.
I am a beginner in Python. Can someone shed some light on what was going wrong?

Hi, Ashish.

Print the type of the result in both cases. Add a line like this:

```python
print(f"type(db) = {type(db)}")
```

What you will find is that if you leave the `axis` and `keepdims=True` arguments there, the result is an np array. Even after you squeeze it in that case, it's still an np array, just with the trivial dimensions removed. If you omit those arguments, then the result is a scalar. Because b is a scalar in the case of Logistic Regression, you want db to also be a scalar. That's what that assertion is checking. Note that the bias values will no longer be scalars once we "graduate" to real Neural Networks in Week 3.
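Here is a minimal sketch of that type difference (the A and Y values below are made up for illustration):

```python
import numpy as np

A = np.array([[0.7, 0.6, 0.4]])   # made-up activations, shape (1, 3)
Y = np.array([[1, 1, 0]])         # made-up labels, shape (1, 3)
m = A.shape[1]

# With axis and keepdims, the result stays a 2D array
db_array = np.sum(A - Y, axis=1, keepdims=True) / m
# With plain np.sum over the whole array, the result is a scalar
db_scalar = np.sum(A - Y) / m

print(type(db_array))   # <class 'numpy.ndarray'>
print(type(db_scalar))  # <class 'numpy.float64'>
```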

In the case that it’s an array, you can also print the shape:

```python
print(f"db.shape = {db.shape}")
```

then watch the difference that adding the squeeze makes. Without the squeeze, the shape will be (1, 1), which is a 2D array with one element. With the squeeze it will be (), a 0-dimensional array: still an array, not a Python scalar. Read the documentation for numpy squeeze to understand more. The point is that it removes "trivial" dimensions (axes of length 1), but the result is still an array.
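A quick demonstration of those shapes (the db value here is arbitrary):

```python
import numpy as np

# A (1, 1) array, like db computed with axis=1, keepdims=True
db = np.sum(np.array([[0.1, 0.2]]), axis=1, keepdims=True)
print(db.shape)        # (1, 1) - 2D array with one element

squeezed = np.squeeze(db)
print(squeezed.shape)  # () - all size-1 axes removed
print(type(squeezed))  # <class 'numpy.ndarray'> - still an array, not a scalar
```

That last line is exactly why squeezing db does not satisfy a test that checks for a scalar: the squeezed result is still an ndarray.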