Well, this is just a terminology question. In math, when people say “dot product”, they mean an operation between two vectors. And in math, vectors don’t have an “orientation”: they are not row vectors or column vectors; they are just vectors with a given number of elements. If two vectors have the same number of elements, then v · w is always a scalar.
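As a quick sketch of that point in numpy: two 1-D arrays (no orientation, just a number of elements) dotted together give a single scalar.

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])

# dot product of two 1-D vectors is a scalar: 1*4 + 2*5 + 3*6 = 32
s = np.dot(v, w)
print(s)  # 32.0
```

Note that `v` here is neither a row nor a column; its shape is `(3,)`, not `(1, 3)` or `(3, 1)`.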
Then numpy gets a little sloppy: np.dot really implements full matrix multiplication, not just dot products, but they call it “dot”. The key mathematical point is that the “atomic” operation of matrix multiplication is a dot product between one row of the first operand and one column of the second operand. So dot products are still what is going on there, but when the operands are matrices instead of vectors there are many individual dot products being computed, and those form the scalar elements of the output matrix or vector.
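You can verify that “many dot products” view directly: build the product element by element with explicit row-by-column dot products and compare it to what np.dot returns for the matrices.

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

C = np.dot(A, B)  # full matrix multiply

# each element C[i, j] is the dot product of row i of A with column j of B
manual = np.array([[np.dot(A[i, :], B[:, j]) for j in range(B.shape[1])]
                   for i in range(A.shape[0])])

print(np.array_equal(C, manual))  # True
```

The same decomposition explains the shape rule: a (2, 2) times (2, 2) product needs 4 dot products, each consuming a length-2 row and a length-2 column, so the inner dimensions must match.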
You have to understand what the math formula you are trying to implement means, and you have to understand what operations like transpose and dot product mean. Here’s a thread that addresses a slightly different question (how to know whether to use elementwise multiplication or a dot product) that is worth a look just for the conceptual point I’m making here.
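To make that elementwise-vs-dot distinction concrete, here is a small sketch: `*` multiplies element by element and keeps the shape, while np.dot sums those same products down to a scalar.

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])

# elementwise multiply keeps the shape: [4, 10, 18]
ew = v * w

# dot product sums those elementwise products into one scalar: 32
dp = np.dot(v, w)

print(ew)  # [ 4. 10. 18.]
print(dp)  # 32.0
```

So for 1-D vectors, the dot product is literally the elementwise product followed by a sum, which is often the quickest way to check which one a formula is asking for.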