W2_A1_Ex-8_ Implementing L1 and L2 loss functions

Thanks for any help.

The first rule is: always trust the error message. If it says "Not all tests passed for L1. Check your equation…", go ahead and check that equation. You have to implement the equation below:
\begin{align*} & L_1(\hat{y}, y) = \sum_{i=0}^{m-1}|y^{(i)} - \hat{y}^{(i)}| \end{align*}

First, find the absolute difference between y and y-hat, then sum all those values.

Hint:
Use `np.abs` and `np.sum`.
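Putting that hint together, a minimal sketch might look like this (the example vectors are my own, not from the assignment):

```python
import numpy as np

def L1(yhat, y):
    """L1 loss: sum of the absolute differences between labels and predictions."""
    return np.sum(np.abs(y - yhat))

# Small example: predictions vs. true labels
yhat = np.array([0.9, 0.2, 0.1, 0.4, 0.9])
y = np.array([1, 0, 0, 1, 1])
print(L1(yhat, y))  # ≈ 1.1
```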

Best,
Saif.

I do not understand your hints.
I used this for L1:
loss = np.sum(y, dtype=float)
and it gives me the global variable error.

And I used this for L2:
loss = np.sum(np.abs(yhat-y), axis=0)
and it gives me the global variable error.

Why are you using this? Can you please explain? And is it the same as the L1 equation given to you?

You mean this for L1:
loss = np.sum(np.abs(yhat-y), axis=0)

and for L2 the square?

Yes, but we don’t need to specify any axis; you can skip that.

But it is y - yhat.

And for L2?

loss= np.dot(np.abs(yhat,y)

L2 doesn’t require the abs, but you need to square the difference. The formula is:
\begin{align*} & L_2(\hat{y},y) = \sum_{i=0}^{m-1}(y^{(i)} - \hat{y}^{(i)})^2 \end{align*}
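Following that formula, a minimal sketch (again with example vectors of my own) could be:

```python
import numpy as np

def L2(yhat, y):
    """L2 loss: sum of squared differences. No abs needed, since squares
    are already non-negative."""
    return np.sum((y - yhat) ** 2)

yhat = np.array([0.9, 0.2, 0.1, 0.4, 0.9])
y = np.array([1, 0, 0, 1, 1])
print(L2(yhat, y))  # ≈ 0.43
```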

In this formula for L1, where can I see the np.abs?

Which sign indicates the absolute value? The summation?

How can I tell from the L1 and L2 formulas when I have to use absolute values?

Did I make myself understood?

In the below formula
\begin{align*} & L_1(\hat{y}, y) = \sum_{i=0}^{m-1}|y^{(i)} - \hat{y}^{(i)}| \end{align*}
These two vertical bars, | |, indicate the absolute value. This is just basic mathematical notation. If you are not familiar with the basic math, you should review it before taking any Machine or Deep Learning course.

Furthermore, if you are new to Python, you should learn the basics of Python before taking any of these courses.

It was simply resolved by a single explanation, as you have given right now…

Hi,

Just to piggy-back on this:

I understand we can use the following to calculate the L2 norm:
loss = np.sum((y-yhat)**2)

But how do we use the np.dot() function as stated in the question?
Nothing I tried intuitively worked when trying to use np.dot() to solve this.

What does the dot product do? It takes two vectors of the same size, multiplies each pair of corresponding elements, and then adds up the products to get a single scalar value, which is the sum.

In our case, what we want is the sum of the squares of the elements of a vector, right? So what if we “dotted” it with itself? I’m talking about the “difference” vector, of course.
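To make that concrete, here is a sketch of the dot-product version (example vectors are my own):

```python
import numpy as np

yhat = np.array([0.9, 0.2, 0.1, 0.4, 0.9])
y = np.array([1, 0, 0, 1, 1])

diff = y - yhat            # the "difference" vector
loss = np.dot(diff, diff)  # dotting it with itself sums the squared elements
print(loss)                # ≈ 0.43, same as np.sum((y - yhat) ** 2)
```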
