Week 2 practice programming test failures

I am trying to implement the loss function with the simple code loss=np.abs(y-yhat) but the test cell tells me I’m doing something incorrectly. Here’s the full cell:

# GRADED FUNCTION: L1

def L1(yhat, y):
    """
    Arguments:
    yhat -- vector of size m (predicted labels)
    y -- vector of size m (true labels)

    Returns:
    loss -- the value of the L1 loss function defined above
    """

    #(≈ 1 line of code)
    # loss =
    # YOUR CODE STARTS HERE

    # moderator edit: code removed

    # YOUR CODE ENDS HERE

    return loss

and here is the error output:

L1 = [0.1 0.2 0.1 0.6 0.1]
Error: The function should return a float.
Error: Wrong output
0 Tests passed
2 Tests failed

AssertionError                            Traceback (most recent call last)
<ipython-input-...> in <module>
      3 print("L1 = " + str(L1(yhat, y)))
      4 
----> 5 L1_test(L1)

~/work/release/W2A1/public_tests.py in L1_test(target)
    216     ]
    217 
--> 218     test(test_cases, target)
    219 
    220 def L2_test(target):

~/work/release/W2A1/test_utils.py in test(test_cases, target)
     24     print('\033[92m', success," Tests passed")
     25     print('\033[91m', len(test_cases) - success, " Tests failed")
--> 26     raise AssertionError("Not all tests were passed for {}. Check your equations and avoid using global variables inside the function.".format(target.__name__))

AssertionError: Not all tests were passed for L1. Check your equations and avoid using global variables inside the function.

The loss values are supposed to be scalars, right? But your answer is a vector. Take another look at the mathematical formula: that big \Sigma is the mathematical symbol for a sum, so the result is a single number.

So you got the first step right: you took the absolute value of the differences of the corresponding elements of the two vectors. The second step is to add those values up.
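In numpy terms, that second step is just np.sum wrapped around what you already have. Here is a minimal sketch of the two steps, assuming yhat and y are 1-D numpy arrays; the function name and sample values below are only illustrative, not the graded code:

import numpy as np

def l1_sketch(yhat, y):
    # step 1: elementwise absolute differences -> a vector
    # step 2: sum that vector -> a single float
    return float(np.sum(np.abs(y - yhat)))

# illustrative inputs (not the ones from the test cell)
yhat = np.array([0.75, 0.25, 0.5])
y = np.array([1.0, 0.0, 1.0])
print(l1_sketch(yhat, y))  # 0.25 + 0.25 + 0.5 = 1.0, a single float

For comparison, the vector your current code printed above, [0.1 0.2 0.1 0.6 0.1], sums to 1.1, which is a single float of the kind the test is looking for.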

I went back and looked at that section of the notebook again for the first time in a while, and it really doesn't say much in detail about the losses; in particular, it never states explicitly that the results of both the L1 and L2 loss are scalars. You have to deduce that from the math formulas: we take two vectors \hat{y} and y and compute a loss value for each element (the absolute value of the difference in the L1 case, or the square of the difference in the L2 case). That gives us a vector of the same shape as the input vectors. Then we add up the elements of that "vector of losses" to get the total loss.

Note that when we get to the real assignments, the loss functions will be the mean of the elementwise losses rather than the sum. Stay tuned for more on this as we learn about Logistic Regression and Neural Networks in the rest of Course 1.
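In the meantime, the L2 case follows the same pattern, and so does that mean-based variant. A minimal sketch under the same assumptions (1-D numpy arrays; the names l2_sketch and mean_l1_sketch are just illustrative):

import numpy as np

def l2_sketch(yhat, y):
    # elementwise squared differences, summed to a single float
    return float(np.sum((y - yhat) ** 2))

def mean_l1_sketch(yhat, y):
    # the same elementwise absolute differences as L1, but averaged over
    # the m elements instead of summed, in the spirit of the later cost functions
    return float(np.mean(np.abs(y - yhat)))

Either way, the key point is that the reduction (sum or mean) is what turns the vector of elementwise losses into a scalar.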

Thank you so much, Paul! That worked completely. Now that I know there is a difference between scalar and vector data, I'll keep a better eye on that. Thank you for getting back to me so quickly.

Glad to hear that it helped. I'd say the more important thing to watch is making sure you understand the meaning of the math formulas. Everything we'll be doing here is translating math formulas into linear algebra operations and then turning those into Python code, so the critical first step is understanding what the math means.