Yes, it is a good point that “@” is also available as an “overloaded” operator for matrix multiplication (it invokes np.matmul, which agrees with np.dot for 2-D arrays), just as * is equivalent to np.multiply. For whatever reason, Prof Ng just doesn’t seem to use @ in that way. One reason may be that things get quite a bit more complicated once we switch to using TensorFlow, which first happens in Course 2 Week 3. That introduces another level of complexity: you can mix TF and numpy operations, so the question is which of the possible interpretations of @ or * will take precedence in a given situation. This is just my conjecture, not based on any knowledge of Prof Ng’s actual thinking on this topic, so it’s probably worth exactly what it cost you.
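To make the equivalences concrete, here is a quick sketch in numpy (the @ operator requires Python ≥ 3.5 and numpy ≥ 1.10):

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])

# @ invokes np.matmul; for 2-D arrays this agrees with np.dot
assert np.array_equal(A @ B, np.matmul(A, B))
assert np.array_equal(A @ B, np.dot(A, B))

# * is element-wise, the same as np.multiply
assert np.array_equal(A * B, np.multiply(A, B))

print(A @ B)   # matrix product
print(A * B)   # element-wise (Hadamard) product
```

Note that for arrays with more than two dimensions np.matmul and np.dot are no longer the same operation, which is one more reason to be explicit about which one you mean.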
I had assumed that Prof Ng did not use @ or any other symbol to show matrix multiplication, since the most common convention is juxtaposition for standard matrix multiplication, whether with a scalar, a vector, or another matrix, provided the dimensions are compatible. (I’ll have to wait until tensors are discussed in TF, although I am also presently trying to cover them as much as possible in Deep Learning, Goodfellow et al.)
I am not sure whether you were speaking of the order of precedence with respect to @ and *. My humble guess was that @ would have higher precedence, because matrix multiplication is non-commutative in the general case whereas element-wise * is commutative, but it turns out Python actually gives @ and * the same precedence level, with both grouping left-to-right. You did say that both operators are overloaded, so I’ll be sure to be very attentive to the possible interpretations you alluded to.
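A quick check of the equal-precedence, left-to-right rule (per PEP 465, which added the @ operator):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])
C = np.array([[2, 0], [0, 2]])

# @ and * share one precedence level and group left-to-right,
# so A @ B * C parses as (A @ B) * C, not A @ (B * C).
chained = A @ B * C
left_grouped = (A @ B) * C
right_grouped = A @ (B * C)

assert np.array_equal(chained, left_grouped)
print(np.array_equal(chained, right_grouped))  # False here: grouping matters
```

So when mixing the two operators in one expression, parenthesizing explicitly is probably the safest habit.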
Yes, I was only referring to @ as an operator in python, not as mathematical notation. In the math notation, Prof Ng does follow the common convention that no explicit operator is required for matrix multiplication.