I got a little confused about how we calculate the activations in Python. In our class example, we create a matrix W where each column is the weight vector of one neuron, and we compute the matrix multiplication transpose(X)@W+B. However, I found a different definition of this matrix multiplication in other sources: a=transpose(W)@X+B. I’ve provided the example image below.
There is no universal standard for how the examples are oriented in X (as rows or as columns), so the orientation of the W matrix just has to be appropriate for the X matrix. You may find many different implementations.

Both ways are fine. It mostly depends on how the matrices (weights and input data) are laid out. Just be consistent with whichever convention you choose.
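To see that the two conventions compute the same thing, here is a minimal NumPy sketch (the shapes are chosen just for illustration: 3 input features, 2 neurons, 4 examples):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 2))   # each column is one neuron's weight vector
B = rng.normal(size=(1, 2))   # one bias per neuron
X = rng.normal(size=(3, 4))   # examples stored as columns (features x examples)

# Convention 1: examples as rows -> Z1 has shape (examples, neurons)
Z1 = X.T @ W + B              # (4,3) @ (3,2) + (1,2) -> (4,2)

# Convention 2: examples as columns -> Z2 has shape (neurons, examples)
Z2 = W.T @ X + B.T            # (2,3) @ (3,4) + (2,1) -> (2,4)

# The two results are transposes of each other
print(np.allclose(Z1, Z2.T))  # True
```

Either orientation gives the same pre-activations; only the shape of the result (and therefore how you broadcast the bias) differs.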