C1_W2: Logistic Regression on m examples. (Error?)

While watching the lecture, I saw Professor Ng write code for performing logistic regression on m examples with two features.
He calculates z in the first line of the for loop. He has written the following code:
z = wT*x(i) + b

Since we are considering two features, shouldn’t the code be:
z = (w1T)*x1(i) + (w2T)*x2(i) + b

The rest of his code for w1, w2, dw1, dw2, b and db is fine.

Hello @Nishok

Exactly because there are two features, when x is not subscripted it is a vector of two elements (x_1 and x_2). Similarly, exactly because there are two features, when w is not subscripted it is a vector of two elements (w_1 and w_2). Note the T symbol on w, which means transpose; transpose only applies to a vector or matrix, not to a scalar.

Raymond

So… is it a mistake in his slide?
Because in a different slide (not the code), he wrote:
z = (w1)*x1 + (w2)*x2 + b

They are just two ways of expressing the same thing: in vector form, or with scalars. Even in the slide you just shared, the first line uses the vector form. Did you know we can express it both ways?
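To make the equivalence concrete, here is a minimal NumPy sketch (the variable names and values are mine, purely for illustration): with two features, w and x^(i) are 2-element column vectors, so w^T x^(i) expands to exactly w1*x1 + w2*x2.

```python
import numpy as np

# Two features: w and x^(i) are 2-element column vectors (illustrative values).
w = np.array([[0.5], [-1.2]])   # w = [w1, w2]^T
x = np.array([[2.0], [3.0]])    # x^(i) = [x1, x2]^T
b = 0.1

# Vector form: z = w^T x + b
z_vector = (w.T @ x).item() + b

# Scalar form: z = w1*x1 + w2*x2 + b
z_scalar = w[0, 0] * x[0, 0] + w[1, 0] * x[1, 0] + b

print(z_vector, z_scalar)  # both forms give the same number
```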

Yes, I am aware that we can express it in two ways. I was just confused about why he used the vector representation at the beginning of the for loop but then calculated dw1 and dw2 separately rather than in vector form. It took me a while to realize that we are not using full vectorization there, so that is not possible.

Well now I understand what is going on.

I see. Thank you for letting us know, @Nishok!

It’s partially vectorized.

In the fully vectorized version, we don't even need the loop over samples, but here a loop is used. That is the first place where the lecture didn't vectorize. The second place is, as you said, the calculation of dw_i: it is not vectorized, but instead explicitly shows us what happens for each feature. The only vectorized part is the computation of z^{(i)}.
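As a sketch of the difference, here is my own minimal reconstruction (not the exact slide code; the toy data is made up) of the partially vectorized gradient loop next to the fully vectorized version. Both compute the same dw1, dw2 and db.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: m = 3 examples, 2 features (illustrative values).
X = np.array([[1.0, 2.0, -1.0],
              [0.5, -0.5, 1.5]])   # shape (2, m)
y = np.array([1, 0, 1])            # shape (m,)
w = np.zeros((2, 1))
b = 0.0
m = X.shape[1]

# --- Partially vectorized (lecture style): loop over examples,
# but z^(i) = w^T x^(i) + b is a vector dot product over features.
dw1 = dw2 = db = 0.0
for i in range(m):
    x_i = X[:, i:i + 1]               # column vector, shape (2, 1)
    z_i = (w.T @ x_i).item() + b      # vectorized over features only
    a_i = sigmoid(z_i)
    dz_i = a_i - y[i]
    dw1 += X[0, i] * dz_i             # per-feature, not vectorized
    dw2 += X[1, i] * dz_i
    db += dz_i
dw1, dw2, db = dw1 / m, dw2 / m, db / m

# --- Fully vectorized: no loop over examples at all.
Z = w.T @ X + b                       # shape (1, m)
A = sigmoid(Z)
dZ = A - y.reshape(1, -1)
dw = (X @ dZ.T) / m                   # shape (2, 1)
db_vec = dZ.mean()

print(dw1, dw2, db)                   # loop version
print(dw.ravel(), db_vec)             # vectorized version agrees
```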

However, I think you already knew all of the above. That's great!

Cheers,
Raymond