### 2.1 Implement the L1 and L2 loss functions

### Exercise 8 - L1

Implement the numpy vectorized version of the L1 loss. You may find the function abs(x) (absolute value of x) useful.

**Reminder**:

- The loss is used to evaluate the performance of your model. The bigger your loss is, the more different your predictions ($\hat{y}$) are from the true values ($y$). In deep learning, you use optimization algorithms like Gradient Descent to train your model and to minimize the cost.
- L1 loss is defined as:

$$L_1(\hat{y}, y) = \sum_{i=0}^{m-1} |y^{(i)} - \hat{y}^{(i)}| \tag{6}$$
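As a quick sanity check of definition (6), the sum of absolute differences can be computed both as an explicit loop over the index $i$ and in vectorized NumPy form; the two should agree (the example vectors below are illustrative, matching the shape $m = 5$ used later in this exercise):

```python
import numpy as np

# Example predictions and true labels (m = 5)
yhat = np.array([0.9, 0.2, 0.1, 0.4, 0.9])
y = np.array([1, 0, 0, 1, 1])

# Definition (6) written as an explicit loop over i = 0 .. m-1
loss_loop = sum(abs(y[i] - yhat[i]) for i in range(len(y)))

# Equivalent vectorized form: elementwise |y - yhat|, then sum
loss_vec = np.sum(np.abs(y - yhat))

print(loss_loop, loss_vec)  # both are 1.1 up to floating-point rounding
```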

In [48]:

```python
# GRADED FUNCTION: L1

def L1(yhat, y):
    """
    Arguments:
    yhat -- vector of size m (predicted labels)
    y -- vector of size m (true labels)

    Returns:
    loss -- the value of the L1 loss function defined above
    """
    #(≈ 1 line of code)
    # YOUR CODE STARTS HERE
    loss = np.sum(np.abs(y - yhat))
    # YOUR CODE ENDS HERE

    return loss
```

In [49]:

```python
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L1 = " + str(L1(yhat, y)))

L1_test(L1)
```