Regularization - L1 and L2

Hello, sorry if my English isn't good and my question doesn't make sense, but I have a question because I don't understand regularization.
Why does L1 use the norm while L2 uses the squared norm? I know the norm measures a vector's size, but I don't understand the reason for squaring it. Thanks to the community, I learn so much reading all the questions!!

  • coursera-platform

@BrunoRibeiro

Is your doubt more about why L2 uses the sum of the squared model weights?

Yes!

As you know, L2 regularization shrinks weights toward zero but almost never makes them exactly zero, and it penalizes larger coefficients more heavily. The squared norm is used mainly because it makes optimization nicer: it is smooth and differentiable everywhere, and its gradient for each weight is proportional to the weight itself, so the shrinking force fades as a weight approaches zero (coefficients end up small but non-zero). It also spreads weight across correlated features instead of arbitrarily picking one, which is why L2 (ridge) handles multicollinearity in the data well. The plain L1 norm, by contrast, pushes every weight with a constant force regardless of its size, which is what drives some coefficients exactly to zero.
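
To make that concrete, here is how the two penalized objectives are usually written (using $\lambda$ for the regularization strength; your course may use slightly different notation):

$$
J_{L1}(\mathbf{w}) = \text{Loss}(\mathbf{w}) + \lambda \sum_i |w_i|
\qquad
J_{L2}(\mathbf{w}) = \text{Loss}(\mathbf{w}) + \lambda \sum_i w_i^2
$$

Differentiating each penalty with respect to a single weight $w_i$:

$$
\frac{\partial}{\partial w_i}\,\lambda \sum_j |w_j| = \lambda\,\mathrm{sign}(w_i)
\qquad
\frac{\partial}{\partial w_i}\,\lambda \sum_j w_j^2 = 2\lambda w_i
$$

The L2 gradient $2\lambda w_i$ shrinks a weight in proportion to its current size, so the push vanishes as the weight gets close to zero, while the L1 gradient $\lambda\,\mathrm{sign}(w_i)$ keeps the same magnitude all the way down, which is what sets some weights exactly to zero.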
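
If it helps to see it numerically, here is a minimal sketch (just an illustration, not from the course) using scikit-learn's `Lasso` (L1) and `Ridge` (L2) on two nearly duplicate features:

```python
# Minimal sketch, assuming NumPy and scikit-learn are installed:
# compare how L1 (Lasso) and L2 (Ridge) treat the same correlated features.
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)

# Two nearly identical (multicollinear) features plus one pure-noise feature.
x1 = rng.normal(size=200)
X = np.column_stack([x1, x1 + 0.01 * rng.normal(size=200), rng.normal(size=200)])
y = 3.0 * x1 + 0.1 * rng.normal(size=200)

lasso = Lasso(alpha=0.1).fit(X, y)  # L1 penalty: lambda * sum(|w|)
ridge = Ridge(alpha=0.1).fit(X, y)  # L2 penalty: lambda * sum(w^2)

# Typically, Lasso zeroes out one of the duplicate features entirely,
# while Ridge splits the weight between them, keeping both small but non-zero.
print("L1 (Lasso) coefficients:", lasso.coef_)
print("L2 (Ridge) coefficients:", ridge.coef_)
```

Running something like this usually shows L1 producing an exact zero for one of the twin features, while L2 shares the coefficient between them, which is the sparsity-versus-shrinkage difference in action.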