You’re missing an operator here:
The transposes are wrong in the first term, the one with np.dot. I thought we had already covered that issue earlier in this thread. To understand the point, please read what I said there; it is also essential that you follow the link I gave in that earlier post and understand the points made on that other thread.
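To make the point concrete, here is a minimal sketch of the dot-product version of the cost, assuming AL and Y are 1 x m row vectors (the sample values here are made up for illustration, not taken from the exercise):

```python
import numpy as np

# Made-up example data: AL and Y are 1 x m row vectors
Y = np.array([[1., 0., 1.]])        # true labels
AL = np.array([[0.9, 0.2, 0.7]])    # activations from the final layer
m = Y.shape[1]

# Transpose the SECOND operand of each dot product, so the shapes are
# (1 x m) dot (m x 1) -> (1 x 1)
cost = -(1. / m) * (np.dot(Y, np.log(AL).T) + np.dot(1 - Y, np.log(1 - AL).T))
cost = np.squeeze(cost)  # reduce the (1, 1) array to a scalar
```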
Hi @paulinpaloalto
I have read the earlier thread, and as I understood it, you can either transpose Y with the dot product and keep np.sum, or transpose A and then there is no need for np.sum. Which option is correct? I would appreciate your support; the image is not clear.
Please read this post. It is not correct to transpose Y. That post demonstrates why that is wrong and why it matters which argument you transpose.
That post gives a very concrete demonstration by doing the computation both ways and showing that the answers are different.
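Here is the gist of that demonstration, sketched with made-up 1 x 3 inputs:

```python
import numpy as np

Y = np.array([[1., 0., 1.]])       # 1 x 3 labels (made up)
AL = np.array([[0.9, 0.2, 0.7]])   # 1 x 3 activations (made up)

# Transposing AL: (1 x 3) dot (3 x 1) -> (1 x 1), the sum we want
good = np.dot(Y, np.log(AL).T)

# Transposing Y: (3 x 1) dot (1 x 3) -> (3 x 3), an outer product
bad = np.dot(Y.T, np.log(AL))

print(good)       # [[-0.46203546]]
print(bad.shape)  # (3, 3) -- a matrix, not the scalar the cost requires
```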
I read that post again, and I transposed AL instead of Y, as per your recommendation, but it still does not fix the second test, which keeps failing. Please find below:
The cost should be a scalar or a 1 x 1 np array, but yours is a 3 x 3 array, so you must be doing something wrong. In your previous version you had two terms: the Y = 1 term, which you did with dot products, and the Y = 0 term, which you did with an elementwise multiply followed by np.sum. In the elementwise version, you don’t need any transposes, right?
My recommendation (which I also made earlier in this thread) is that you be consistent and use the same coding style for both the Y = 1 and Y = 0 terms.
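As an illustration (again with made-up inputs, not the exercise’s), the fully elementwise version needs no transposes at all:

```python
import numpy as np

Y = np.array([[1., 0., 1.]])       # made-up 1 x m labels
AL = np.array([[0.9, 0.2, 0.7]])   # made-up 1 x m activations
m = Y.shape[1]

# Both terms elementwise, then one np.sum over everything; no transposes
cost = -(1. / m) * np.sum(Y * np.log(AL) + (1 - Y) * np.log(1 - AL))
```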
Observations:
If you have two vectors with sizes (1 x K) and (K x 1), then there are two possible dot products (demonstrated in the sketch below):
- (1 x K) · (K x 1) gives a (1 x 1) result (a scalar).
- (K x 1) · (1 x K) gives a (K x K) matrix result.
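A quick numpy check of both cases, with K = 3:

```python
import numpy as np

row = np.array([[1., 2., 3.]])   # (1 x K)
col = row.T                      # (K x 1)

print(np.dot(row, col))  # [[14.]] -- the (1 x 1) inner product
print(np.dot(col, row))  # a (3 x 3) matrix -- the outer product
```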
Salam @paulinpaloalto,
Thank you for your effort; I solved my problem by doing the calculations on paper with a pencil.
Salam @TMosh,
Thank you for your observations; they were a really good hint.