Here Andrew applies vectorization to the logistic regression function to show that the vectorized approach is about 300 times faster than the non-vectorized approach.
The non-vectorized approach has O(n) time complexity. I am curious how np.dot() works under the hood to make it so much faster, and whether it is actually better than O(n).
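As a quick sanity check on this question, here is a minimal sketch (not the course code, and the sizes are arbitrary) comparing a pure-Python loop with np.dot on the same vectors. Both are O(n) asymptotically; the speedup comes from the constant factor, since np.dot runs the loop in compiled BLAS code instead of the interpreter.

```python
import time
import numpy as np

# Toy size chosen just for illustration
n = 200_000
rng = np.random.default_rng(0)
a = rng.random(n)
b = rng.random(n)

start = time.perf_counter()
loop_result = 0.0
for i in range(n):          # interpreted loop: one bytecode dispatch per element
    loop_result += a[i] * b[i]
loop_time = time.perf_counter() - start

start = time.perf_counter()
dot_result = np.dot(a, b)   # single call into compiled (BLAS) code
dot_time = time.perf_counter() - start

print(f"loop: {loop_time:.4f}s  np.dot: {dot_time:.6f}s")
```

The exact speedup factor depends on the machine and BLAS build, but the asymptotic class is the same O(n) either way.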

Generally speaking, with for loops you need quite a few nested loops to walk through the matrix and multiply elements with each other, but with matrix multiplication you can take a whole row, multiply it with the corresponding column, and sum the products, i.e. element-wise correspondence.
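The "row times column" idea above can be sketched as follows (a toy illustration, with made-up sizes, not NumPy's actual implementation): a naive triple loop next to NumPy's built-in matrix product.

```python
import numpy as np

def naive_matmul(A, B):
    """Multiply matrices with explicit loops: C[i, j] is row i of A
    multiplied element-wise with column j of B, then summed."""
    m, k = A.shape
    k2, p = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((m, p))
    for i in range(m):
        for j in range(p):
            for t in range(k):      # dot product of row i and column j
                C[i, j] += A[i, t] * B[t, j]
    return C

rng = np.random.default_rng(1)
A = rng.random((20, 30))
B = rng.random((30, 10))
C = naive_matmul(A, B)
```

Both compute the same result; NumPy just performs the inner summation in optimized compiled code rather than Python loops.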

You could search on Google to find the source code for the NumPy dot product!

Thanks for the reply @gent.spah
I checked out the article you linked… though it's good for understanding the function, it doesn't discuss the time complexity in Big O terms.
Actually, I come from a background of using C++ and Java for over 5 years now, both for solving LeetCode questions and for backend development at my company. A lot of the focus there was on making the logic as time-efficient as possible, and a common way to check that was by analysing the code in terms of Big O time complexity. I feel that aspect is missing in Python. Would it be possible to do any of the implementations we've done so far in C++ or Java?

Yes, Python code is also analyzed in terms of time complexity, just like every other programming language. You can search on Google for the dot product implementation and derive its time complexity from there. I gave you my understanding above!
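To make the Big O point concrete with the course's own example: here is a hedged sketch (toy sizes, my own variable names) of the logistic regression forward pass, written both vectorized and with explicit loops. Vectorizing does not change the Big O — both forms do n_x * m multiply-adds — it only moves the loop from the interpreter into compiled code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_x, m = 4, 5                       # features, examples (toy sizes)
rng = np.random.default_rng(2)
w = rng.random((n_x, 1))            # weights
b = 0.1                             # bias
X = rng.random((n_x, m))            # one example per column

# Vectorized: one np.dot call, O(n_x * m) work
A_vec = sigmoid(np.dot(w.T, X) + b)

# Non-vectorized: the same O(n_x * m) work, done in Python loops
A_loop = np.zeros((1, m))
for j in range(m):
    z = b
    for i in range(n_x):
        z += w[i, 0] * X[i, j]
    A_loop[0, j] = sigmoid(z)
```

So the same complexity analysis you would do in C++ or Java applies directly; what differs between the two versions is the constant factor, not the asymptotic class.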