Does regularization drive only low weights toward zero?

About this slide: do I understand correctly that w will tend toward zero, degrading the neuron's influence, only in cases where this w was already low? So when W was high, the neuron will stay, and regularization will only reduce its w a bit? But it was presented as something that primarily targets high weights. So can you please clarify this formula?

Regularization does not simply increase or decrease W. Our goal is to minimize the cost, $J(W^{[l]}, b^{[l]})$. No matter what W is (positive or negative, small, large, or zero), the aim is to minimize the cost. If decreasing W decreases the cost, the model will decrease W. However, if increasing W decreases the cost, the model will increase W.
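To make this concrete, here is a minimal sketch of one L2-regularized gradient step (the function name, `lam`, and the example numbers are my own illustration, not from the slide). It shows that the penalty's pull toward zero is proportional to each weight's own magnitude, so a large weight loses more in absolute terms even though every weight shrinks by the same factor:

```python
import numpy as np

def l2_regularized_update(W, grad_data, lam, m, alpha):
    """One gradient-descent step on J = J_data + (lam / (2*m)) * ||W||_F^2.

    The penalty's gradient is (lam / m) * W, so the pull toward zero is
    proportional to each weight's own magnitude: large weights are pulled
    harder (in absolute terms) than small ones.
    """
    grad = grad_data + (lam / m) * W  # data gradient plus penalty gradient
    return W - alpha * grad           # standard gradient-descent step

# With a zero data gradient, every weight shrinks by the same factor
# (1 - alpha * lam / m), so the big weight loses far more in absolute value.
W = np.array([[5.0, 0.1]])
W_next = l2_regularized_update(W, np.zeros_like(W), lam=1.0, m=10, alpha=0.5)
print(W_next)  # [[4.75  0.095]]: 5.0 shrank by 0.25, 0.1 by only 0.005
```

And if the data gradient at some weight points away from zero strongly enough, the net update can still increase that weight, which is exactly the point above: the model minimizes the total cost, not the weights themselves.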

Best,
Saif.

Exactly! That is how I understood it too. But in the videos and tests there is a strong emphasis that this regularization specifically reduces W, and nothing about the fact that it can increase W too.

Regularization is different from the cost as a whole. Check my reply to your other post.
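One way to see the distinction (a sketch using the standard L2-regularized cost from the course; $J_{\text{data}}$ is my label for the unregularized part):

$$
J(W^{[l]}, b^{[l]}) = J_{\text{data}} + \frac{\lambda}{2m} \sum_{l} \left\| W^{[l]} \right\|_F^2,
\qquad
\frac{\partial J}{\partial W^{[l]}} = \frac{\partial J_{\text{data}}}{\partial W^{[l]}} + \frac{\lambda}{m} W^{[l]}
$$

The regularization term on its own always pulls every weight toward zero, and pulls harder on larger weights; that is why the videos emphasize that it reduces W. But the data term can push in either direction, so the total cost can still be minimized by letting a particular weight grow.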