Can somebody tell me why this code can't be used in deep learning?

I’m not sure what your question is. If you mean that you think that is the complete solution for “update parameters”, then the answer is that you have omitted the “learning rate” \alpha. You can’t just apply the raw gradient directly, which effectively assumes that \alpha = 1; that may cause overshooting. The value of \alpha is a “hyperparameter” (meaning a value you have to choose yourself, rather than one the algorithm can learn automatically), and it needs to be chosen carefully. Prof Ng spends quite a bit of time in DLS Course 2 (mostly in the first week) talking about how to choose hyperparameter values intelligently.
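To make the point concrete, here is a minimal sketch of that update step with the learning rate included. The names `W`, `b`, `dW`, `db`, and the function name are illustrative, not from your assignment code:

```python
import numpy as np

def update_parameters(W, b, dW, db, alpha=0.01):
    # Scale the gradients by the learning rate alpha instead of
    # subtracting them directly (which would implicitly use alpha = 1).
    W = W - alpha * dW
    b = b - alpha * db
    return W, b

# Illustrative values
W = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([[0.5], [0.5]])
dW = np.array([[0.1, 0.1], [0.1, 0.1]])
db = np.array([[0.2], [0.2]])

W_new, b_new = update_parameters(W, b, dW, db, alpha=0.1)
```

With \alpha = 0.1 each parameter moves only a small step in the negative gradient direction, rather than the full gradient magnitude.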

Or if the question is about why the “-=” construct can’t be used, and not about \alpha, then the answer is that it can be used, but there are some subtleties you need to be aware of in the way object references work in Python. Here’s a thread which explains that set of issues.
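Here is a short sketch of the reference subtlety in question (variable names are illustrative). With NumPy arrays, `W -= ...` modifies the array object in place, so any other variable bound to the same array sees the change, whereas `W = W - ...` creates a new array:

```python
import numpy as np

# In-place update: "-=" mutates the existing array object.
W = np.zeros((2, 2))
W_saved = W          # a reference to the SAME array, not a copy
W -= 1.0             # mutates the shared object
# W_saved now contains -1.0 everywhere, even though we never touched it.

# Rebinding: "W2 = W2 - 1.0" builds a new array and rebinds the name.
W2 = np.zeros((2, 2))
W2_saved = W2
W2 = W2 - 1.0        # new array; W2_saved still points at the old zeros
```

This is why, e.g., caching parameters before an update requires an explicit `.copy()` if you later update them with “-=”.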